Dennis Hackethal’s Blog

My blog about philosophy, coding, and anything else that interests me.

Published · 4-minute read

Choosing between Theories

Kieren has asked about how to choose between (conflicting) theories:

Well, aside from violent shakings :) a path forward for me would be a Popperian solution to the practical problem of induction (choosing between different theories for practical purposes).

http://www.homepages.ucl.ac.uk/~ucessjb/Salmon.pdf

I’ve skimmed the beginning of the paper by Salmon which Kieren linked to, and found this sentence on p. 117 noteworthy:

[…] Popper’s account of scientific knowledge involves generalisations and their observational tests.

That sounds like a misrepresentation: Popper’s account of scientific knowledge is not about generalizations but about explanations, which can’t be obtained by generalizing, only through guesses and criticism.

Salmon references a work I do not own and do not wish to purchase at this time, so it’s hard to say whether he’s wrong or I’m wrong. Which brings us, again, to the question of how to choose between conflicting theories (or claims, in this case). The problem is that of breaking symmetry, an idea by Elliot Temple (see Curiosity – Symmetry and Curiosity – Epistemology).

Just knowing that two ideas conflict doesn’t tell you which one is wrong (assuming they really do conflict, and assuming only one of them is wrong). As Elliot writes:

“X contradicts Y” means that “Y contradicts X”.

So the ideas are symmetric in that way, and to make progress, you need to find a way of “breaking the symmetry”, as Elliot calls it.

Justificationism, for example, serves as a way to break symmetry. You can ask: which idea has received more support, which is better justified, and so on? If one believes that justificationism can do this job, then one won’t want to get rid of it without a replacement. That is fair, and it is why it’s not enough to point out to people that justificationism is false. They still need a way to break symmetries, so an alternative is needed.

In his book The Beginning of Infinity (chapter 1), David Deutsch suggests looking at how “hard to vary” an explanation is. As in: can we make arbitrary changes to an explanation without it losing its ability to explain the phenomenon it purports to explain? This is useful, but when comparing two different explanations, I know of no way to methodically compare their ‘hardness to vary’. In some cases it’s more or less apparent – like when comparing, as Deutsch does, a Greek myth that ‘explains’ the seasons by invoking gods to today’s axis-tilt theory: you could replace one Greek god with another and you’d still be able to explain the seasons. The axis-tilt theory, on the other hand, is hard to vary without it breaking apart. It’s not easy to replace the earth’s axis with something else and not ruin the explanation in the process. But when comparing other theories, breaking the symmetry using ‘hardness to vary’ can be more difficult, particularly when both seem roughly equally hard or easy to vary.

For example, Kieren is looking for a way to break symmetry between the two opposing claims ‘consciousness requires creativity’ and ‘consciousness does not require creativity’. Deutsch has spoken in favor of the former:

My guess is that every AI is a person: a general-purpose explainer. It is conceivable that there are other levels of universality between AI and ‘universal explainer/constructor’, and perhaps separate levels for those associated attributes like consciousness. But those attributes all seem to have arrived in one jump to universality in humans, and, although we have little explanation of any of them, I know of no plausible argument that they are at different levels or can be achieved independently of each other. So I tentatively assume that they cannot.

The Beginning of Infinity, chapter 7

For clarity, one cannot be a general-purpose explainer without being creative. So Deutsch argues that creativity (at least the universal kind, if there are non-universal kinds) is what makes a general-purpose explainer, which in turn leads to consciousness.

That means Deutsch breaks the symmetry in two ways:

  1. Consciousness (along with other attributes) seems to have arrived in humans together with humans’ ability to explain things. (I have outlined, in detail, how this may have happened.)
  2. He knows of “no plausible argument” that consciousness is at a different level than creativity or can be achieved without it.

Notably, Deutsch does not use his ‘hard to vary’ criterion to break the symmetry here. He instead invokes a historical guess alongside a lack of alternatives.

Something else you could do is find a contradiction within one of the claims. Or you could find that it conflicts with background knowledge which you currently (and tentatively) deem uncontroversial. Technically, finding a contradiction is a special case of that, since rejecting contradictions in favor of consistency is itself part of our background knowledge. (Maybe all symmetry breaking involves comparisons with background knowledge in some way?)

For example, I have been asked how I decide between two related claims: that consciousness arises from all information processing vs. just some information processing (namely the creative kind).

I opted to show that the former claim conflicts with background knowledge: if consciousness arises from all information processing, even things like calculators must be conscious. But our best explanations of how calculators work, which are very good and part of our background knowledge in this case, don’t invoke consciousness, so we should conclude that calculators are not conscious. Therefore, it cannot be true that all information processing results in consciousness. We can even build calculators – and people do so all the time – without understanding how consciousness works. (Whereas, if Deutsch is right that consciousness arises from creativity, then we can’t, say, build artificial general intelligence without understanding how consciousness works.)

For the related claim that consciousness requires creativity, here’s how I break the symmetry: consciousness is a property of information processing. All information processing people have done so far (except in their minds) is execution only, not creative, and, like with calculators, does not lead to consciousness. Then there’s the problem with Lamarckism: that the mere execution of existing knowledge cannot result in new knowledge. So I ask: if ‘execution-only’ information processing cannot be where consciousness lives, then the only place we have left to go is creative information processing, is it not? I know of no other kind of information processing. (Maybe there’s a ‘destructive’ kind, in the sense of wiping memory on a computer, but destruction can be automated, so it seems to fall under the execution-only kind.)

In other words, we simply run out of alternatives. There seem to be only two: execution-only information processing and creative information processing. Our best explanations of execution-only information processing do not invoke consciousness, so the only place left to go is creative information processing. And with the latter, there’s much more room left since we don’t really understand creative information processing at all while we do understand execution-only information processing pretty well.

I’m not sure breaking the symmetry can be boiled down to a recipe. I’m guessing it is itself a creative act and you can always find new ways to do it. In the context of my neo-Darwinian approach to the mind, the idea that breaks symmetry is the one that has spread through the mind at the expense of its rivals, and whose total number of copies is therefore greater than that of any one of its rivals.




What people are saying

Hey Dennis, thanks for taking the time to elaborate your response. So as to prevent the conversation from branching out exponentially I will focus my comments on what I see as the crux of our disagreement. Let me know if there is something I have missed which you would like me to address.

I think that the argument provided by Salmon articulates well the reason why I do not adopt a Popperian epistemology. If you want to refute the criticism laid out by Salmon, then I think you will need to read the first 8 pages - up to page 122 where a neat summary of the criticism is provided.

The rest of my response will be in regards to consciousness.

What Elliot and yourself refer to as breaking symmetry I would describe as ‘providing reasons in favour of one claim over another’, would you agree? The question then becomes: what reasons do you provide for your claim (creativity is required for consciousness)? Which I think you have answered in your post.

You provide a DD quote that I am familiar with, which as you identify, provides two reasons in favor of the claim (two reasons to break symmetry).

DD does not appear to elaborate on reason 1, but I don’t currently have my copy of BoI with me to verify this. I am unsure of what facts about the world lead him to think that things “seem” the way he suggests. Your post about your theory of mind doesn’t seem to elaborate on reason 1 either, only speculating about where consciousness might fit in. I won’t consider reason 1 further unless you wish to elaborate on it yourself.

On first impression of reason 2 it appears false, since there is at least one alternative (plausible in my view) - creativity is NOT required for consciousness. Unless perhaps there is some fact about the world that the claim ‘creativity is required for consciousness’ explains so well that it would be implausible to think otherwise. If you can produce such a fact then I will concede that your claim has good reasons in its favour (breaks symmetry in its favour).

You do seem to suggest one such fact (a curious circumstance that is explained by your claim).

if consciousness arises from all information processing, even things like calculators must be conscious. But our best explanations of how calculators work, which are very good and part of our background knowledge in this case, don’t invoke consciousness, so we should conclude that calculators are not conscious. Therefore, it cannot be true that all information processing results in consciousness.

I do not accept this fact, because I do not accept that we know that calculators are not conscious just because our best theories do not invoke consciousness. Our best theories explaining how human brains work (neuroscience) do not invoke consciousness (except as something to be explained), but we do not conclude that we are not conscious.

#104 · Kieren (people may not be who they say they are) · Referenced in comment #586

I think that the argument provided by Salmon articulates well the reason why I do not adopt a Popperian epistemology. If you want to refute the criticism laid out by Salmon, then I think you will need to read the first 8 pages - up to page 122 where a neat summary of the criticism is provided.

I may do that if I am wrong about Salmon misrepresenting Popper’s account of scientific knowledge. If I’m not wrong about that, Salmon’s misrepresentation seems grave enough that it’s reasonable to expect not much of value to be gathered from his text. So – am I wrong?

What Elliot and yourself refer to as breaking symmetry I would describe as ‘providing reasons in favour of one claim over another’, would you agree?

Although this can sometimes, in effect, be what one ends up doing, I think the approach is a critical one, with the goal of eliminating one of the conflicting theories, not elevating the other in some way by providing support for it.

DD does not appear to elaborate on reason 1, but I don’t currently have my copy of BoI with me to verify this.

I believe you’re correct.

if consciousness arises from all information processing, even things like calculators must be conscious. But our best explanations of how calculators work, which are very good and part of our background knowledge in this case, don’t invoke consciousness, so we should conclude that calculators are not conscious. Therefore, it cannot be true that all information processing results in consciousness.

I do not accept this fact, because I do not accept that we know that calculators are not conscious just because our best theories do not invoke consciousness.

This is a variation on Deutsch’s criterion of reality. From The Beginning of Infinity, chapter 1:

[W]e should conclude that a particular thing is real if and only if it figures in our best explanation of something.

We need some way to determine, tentatively, whether calculators are conscious. Going off of whether our best explanations tell us they are is a good way, I think. And no matter which way we choose, we can always say ‘but they still might be conscious’ – but then we never break the symmetry. In other words: yes, it’s always possible to be mistaken about how to break the symmetry, but one has to try one way or another. I think the fact that our best explanations of calculators – which are fantastic since we have invented them and know how to build and control them – don’t mention consciousness is an almost irrevocably fatal blow to the idea that calculators are conscious, only to be reconsidered if our explanations of calculators change accordingly.

Additionally, there are no big unknowns in our understanding of how calculators work, neither their hardware nor software. With the brain that’s different – when it comes to the brain’s hardware (well, wetware), in addition to being a universal computer, it seems to have all kinds of special-purpose information-processing hardware built in and connected to it (like eyes), some of which we don’t understand well yet. But those are not important for consciousness, and we do understand universal computers well, be they made of wetware or hardware.

Then you wrote:

Our best theories explaining how human brains work (neuroscience) do not invoke consciousness (except as something to be explained), but we do not conclude that we are not conscious.

Well, the parenthetical “(except as something to be explained)” makes all the difference here. Our explanations of calculators don’t have that gaping hole. (Though technically that gaping hole lies not in our explanations of brain hardware but brain software. So, to be clear, and for the comparison to work, when I speak of explanations of calculators, I really mean explanations of their software. For calculators we have great explanations for both their hardware and their software. For the human brain as a universal computer we have great explanations, while for some of its software, especially creativity and consciousness, we do not.)

All that said, I believe your condition of providing “some fact about the world that the claim ‘creativity is required for consciousness’ explains so well that it would be implausible to think otherwise” is still met.

#105 · dennis (verified commenter) · in response to comment #104 · Referenced in comment #588

I may do that if I am wrong about Salmon misrepresenting Popper’s account of scientific knowledge. If I’m not wrong about that, Salmon’s misrepresentation seems grave enough that it’s reasonable to expect not much of value to be gathered from his text. So – am I wrong?

I think it is a little harsh to dismiss the paper from one sentence, but I also understand that no one has time to read every paper that random people on the internet offer up. Anyway, I do think you are wrong in your assessment. My understanding is that Salmon uses the word ‘generalization’ to refer to Popper’s ‘universal statement’ or ‘theory’, and not in the other sense (such as generalizing an idea from a series of observations). So I read this quote as referring to Popper’s account of the asymmetry between justification and falsification, the idea of refuting a universal statement (generalization) with a bona fide counterexample (falsifying observation). This is confirmed by the sentence that follows.

This is a variation on Deutsch’s criterion of reality. From The Beginning of Infinity, chapter 1:

[W]e should conclude that a particular thing is real if and only if it figures in our best explanation of something.

You are right that all the experiences I have in regards to calculators are explained without reference to it being conscious, but using this fact to conclude that it doesn’t exist seems unfair since our current understanding of the nature of consciousness is that I would not have experiences of it (that would require explaining) even if the calculator did have consciousness.

The reason I do not think calculators have consciousness is not because it is a part of my best explanations, but instead because it does not follow from my best explanations (currently accepted knowledge).

For example, consider planets so many light years away that we do not have any experience of them. The reason I believe that such planets exist isn’t because they are involved in my explanations of things experienced (they aren’t); rather, I believe in their existence because they follow from my best explanations of things experienced.

I think the way DD puts it is explaining the unseen from our explanations of the seen.

So when it comes to human and animal consciousness, one line of reasoning that follows from my currently accepted knowledge looks like this:

  1. Routinely I find evolved aspects of my biological self are also present in other animals.
  2. Consciousness is an evolved aspect of myself.
  3. Therefore, consciousness has a fair chance of being present in other animals.

So while the idea that creativity is required for consciousness would explain the fact that calculators do not have consciousness (which follows from my current background knowledge), it does not explain why uncreative animals are conscious (which also follows from my current background knowledge).

Therefore, the idea of creativity being the driver of consciousness does not explain the facts at hand so well. I think one path going forward would be for us to refute my current background knowledge.

#108 · Kieren (people may not be who they say they are) · in response to comment #105 · Referenced in comment #492

I think it is a little harsh to dismiss the paper from one sentence […]

We need not worry about people’s sensibilities when deciding whether to continue reading their papers. Imagine someone publishes a book called ‘How to Do Basic Arithmetic’ and then claims somewhere on the first few pages that 2 + 2 = 5. You’d put the book down.

That said, I now think I was mistaken, and I did read Salmon’s text through page 122, as you suggested (a bit further actually):

If, however, we make observations and perform tests, but no negative instance is found, all we can say deductively is that the generalisation in question has not been refuted.

Yes.

In particular, positive instances do not provide confirmation or inductive support for any such unrefuted generalisation.

Yes.

At this stage, I claim, we have no basis for rational prediction. Taken in themselves, our observation reports refer to past events, and consequently they have no predictive content. They say nothing about future events.

OK, this is basically Hume’s statement of the problem of induction. But Salmon is wrong to conclude that we have “no basis for rational prediction”. If he’s looking for justification, he’s simply mistaken that that’s needed (or possible). If he’s claiming that prediction is not possible at this stage, he’s mistaken about how theories work. One needs a theory (“generalisation”) before one can perform any tests. If the theory didn’t make any prediction before testing, how would you know what to compare your test results against? A theory alone suffices to make predictions. If you roughly know, from theory, how the earth moves, you can and will predict that the sun will rise tomorrow, even if you have never observed a sunrise before. Lastly, if, on the other hand, he’s claiming that one cannot know whether the theory will continue to make true (or false) predictions in the future – meaning one cannot make reliable predictions about the theory’s predictions – then he’s correct to claim that, but wrong to assert that there’s a problem with that. This is only a problem for someone who’s looking for reliable knowledge, which cannot exist.

My aim is to emphasise that, even if we are entirely justified in letting such considerations determine our theoretical preferences, it is by no means obvious that we are justified in using them as the basis for our preferences among generalisations which are to be used for prediction in the practical decision-making context.

Evidence of him being a justificationist.

Conjectures, hypotheses, theories, generalisations—call them what you will—do have predictive content.

This convinced me that by “generalisation” he means ‘conjecture’ or ‘theory’.

What I want to see is how corroboration could justify such a preference.

Even more evidence of him being a justificationist. Immediately afterwards, he says:

Unless we can find a satisfactory answer to that question, it appears to me that we have no viable theory of rational prediction […]

He’s saying, in effect, that what isn’t justified isn’t rational. This is a bad mistake (and an age-old one at that).

But if every method is equally lacking in rational justification, then there is no method which can be said to furnish a rational basis for prediction, for any prediction will be just as unfounded rationally as any other.

At this point, he’s basically stuck. He’s trying to force Popperian epistemology into a justificationist/inductivist straitjacket and then wonders why that can’t work. He also comes dangerously close to relativism.

We do have reasons for – or rather, means of – preferring some methods over others, namely by elimination through criticism. For example, you wouldn’t flip a coin (his example) to decide on a theory, because by the same method a conflicting theory could be ‘shown’ to be true as well. And the same theory could be ‘shown’ to be false shortly after, and then flip back and forth. So that can’t work, because we know – also from theory – that reality doesn’t flip like that. And you can’t choose the method of sorting theories alphabetically (also his example) because then their truthiness would depend on spelling, and reality doesn’t care about how we spell things. Importantly, justificationism can’t work because it leads to an infinite regress, and we know – again from theory – to reject infinite regresses.

If you keep eliminating methods this way, pretty soon you are left with very few, maybe only one, way of choosing whether to tentatively consider a theory true and whether to act on it. I think that’s why Popper put such emphasis on criticism: it’s not just theories we can criticize, but also our methods of evaluating theories (which are themselves theories), our preferences for doing so (ditto), etc.

Related to that, Deutsch writes in ch. 13 of The Beginning of Infinity:

During the course of a creative process, one is not struggling to distinguish between countless different explanations of nearly equal merit; typically, one is struggling to create even one good explanation, and, having succeeded, one is glad to be rid of the rest.

I think you could reformulate this quote as follows so it applies to the issue at hand: ‘During the course of a creative process, one is not struggling to distinguish between countless different methods of nearly equal merit for judging conflicting theories; typically, one is struggling to create even one good method, and, having succeeded, one is glad to be rid of the rest.’

In light of that, after I read on a bit, I found that Salmon quotes Popper on p. 123 as saying:

Thus the rational decision is always: adopt critical methods which have themselves withstood severe criticism […]

This is precisely the conclusion at which I have arrived independently above.

I will say: Salmon is right to point out that there are problems with Popper’s concept of corroboration. Others have written about that. But I think you can retain much of Popper’s epistemology just fine without accepting that concept. It’s not that important.

An article that may interest you is this one by Elliot, which collects several different articles on the topic of how to resolve conflicts between ideas rationally. (I have not read the linked articles yet apart from the one I mention below.) Note that this is slightly different from Salmon’s problem of rational prediction in particular – and I think he’s mistaken in his focus on prediction over explanation – but it seems to me that once you have rationally chosen an idea, you can rationally make predictions using that idea.

There’s also this article by Elliot, which you may wish to read first, in which he writes:

The idea of a critical preference is aimed to solve the pragmatic problem: how should we proceed while there is a pending conflict between non-refuted theories?

Which sounds right up your alley since it’s about the problem of practical decision-making as referenced by Salmon.

I plan to read these articles myself, and if you like, it could be fun and fruitful to compare notes and maybe discuss further afterwards.

Regarding the calculator stuff, I think it’s notable that you commented on your experiences involving calculators quite a bit (the word ‘experience’ and variants thereof appear five times in your most recent comment). In particular, you wrote:

You are right that all the experiences I have in regards to calculators are explained without reference to it being conscious […]

But that’s not what I said. I made no claims about your experiences (claims about something subjective/psychological), only about how calculators work (claims about something objective/epistemological).

In addition to calculators, there’s also the issue with Lamarckism I mentioned, which is an important factor in breaking symmetry in favor of the idea that execution-only information processing, to which animals seem to be constrained, cannot create new knowledge.

Then you wrote:

Routinely I find evolved aspects of my biological self are also present in other animals.
Consciousness is an evolved aspect of myself.
Therefore, consciousness has a fair chance of being present in other animals.

If somebody pointed out that this isn’t logically valid reasoning, would you consider that a candidate refutation of your background knowledge (as you suggested as a way forward)?

[T]he idea that creativity is required for consciousness […] does not explain why uncreative animals are conscious […]

Well, you can hardly criticize a theory for not doing something it’s not meant to do!

#109 · dennis (verified commenter) · in response to comment #108 · Referenced in comments #490, #588, #589

I noticed that you’ve discussed with Elliot underneath his ‘Rationally Resolving Conflicts of Ideas’ article quite a bit, so maybe you’re already familiar with some of the linked essays.

#110 · dennis (verified commenter) · in response to comment #109

Thanks for taking the time to look further into the Salmon paper :)

My aim is to emphasise that, even if we are entirely justified in letting such considerations determine our theoretical preferences, it is by no means obvious that we are justified in using them as the basis for our preferences among generalisations which are to be used for prediction in the practical decision-making context.

Evidence of him being a justificationist.

Popperians come across as if they are allergic to the words “justification”, “support”, etc. Justified doesn’t have to mean something is proven infallibly true. It can just mean that it has reasons in its favour. Examples of positive reasons are “this theory explains a surprising fact about the world”, “this theory has survived falsification attempts”, or “this theory is good because it is falsifiable”. So I don’t think his use of the word “justified” is grounds for dismissing his argument.

Unless we can find a satisfactory answer to that question, it appears to me that we have no viable theory of rational prediction […]

He’s saying, in effect, that what isn’t justified isn’t rational. This is a bad mistake (and an age-old one at that).

Not a mistake if justified doesn’t mean infallibly proven true. What is without reason is unreasonable/irrational. What he is saying is that the use of corroboration for preferring theories is without reason.

We do have reasons for – or rather, means of – preferring some methods over others, namely by elimination through criticism. For example, you wouldn’t flip a coin (his example) to decide on a theory, because by the same method a conflicting theory could be ‘shown’ to be true as well. And the same theory could be ‘shown’ to be false shortly after, and then flip back and forth. So that can’t work, because we know – also from theory – that reality doesn’t flip like that. And you can’t choose the method of sorting theories alphabetically (also his example) because then their truthiness would depend on spelling, and reality doesn’t care about how we spell things.

You’re invoking reality (as per your best theories) to dismiss these alternative methods, but if your best theories were arrived at through Popper’s critical rationalism, then they were established by a method which is equally empty of reasons supporting its predictions. So I can in turn dismiss what your reality says about the future results of the alternate methods.

It would be as if I did a coin toss to decide if predictions following from Popper’s methodology are any good and the coins told me no, so then I use this result to refute the predictions made by Popperians.

In light of that, after I read on a bit, I found that Salmon quotes Popper on p. 123 as saying:

Thus the rational decision is always: adopt critical methods which have themselves withstood severe criticism […]

This is precisely the conclusion at which I have arrived independently above.

For this, I appreciate Salmon’s response.

When he says, ‘The answer … is exactly the same as before … the rational decision is always: adopt critical methods which have themselves withstood severe criticism,’ he seems to be saying that we should adopt his methodological recommendations, because they have ‘withstood severe criticism’. But his answer is inappropriate in this context because our aim is precisely to subject his philosophical views, in the best Popperian spirit, to severe criticism.

end quote.

I will say: Salmon is right to point out that there are problems with Popper’s concept of corroboration. Others have written about that. But I think you can retain much of Popper’s epistemology just fine without accepting that concept. It’s not that important.

Corroboration is what lets us know which of our yet-to-be-falsified theories to prefer for practical predictions. So if we do away with it, there seems to be a gap that needs filling.

I plan to read these articles myself, and if you like, it could be fun and fruitful to compare notes and maybe discuss further afterwards.

That quote does sound very relevant. I’ll have a look and let you know what I think. Perhaps it is the alternative/replacement to corroboration, but if it is relying on Elliot’s yes/no philosophy then I don’t think it will provide enough.

But that’s not what I said. I made no claims about your experiences (claims about something subjective/psychological), only about how calculators work (claims about something objective/epistemological).

Ah yes I see. I guess I wanted to highlight the fact that we share the experience of calculators whilst experience of consciousness is private. I should have worded it as such.

In addition to calculators, there’s also the issue with Lamarckism I mentioned, which is an important factor in breaking symmetry in favor of the idea that execution-only information processing, to which animals seem to be constrained, cannot create new knowledge.

There is an infinite space of programs that could be executed. Some of them we would call creative, but without any reason to think creativity is especially linked to consciousness, I think the remainder of the space has programs that are just as plausibly conscious. For example, what if a level of self referential modelling within a program conjures up consciousness? Or if certain kinds of (possibly hardware dependent) reinforcement learning algorithms conjure consciousness? Or, more abstractly, some other form of program that we have not yet imagined, but which has evolved to perform certain operations of the brain.

Routinely I find evolved aspects of my biological self are also present in other animals.
Consciousness is an evolved aspect of myself.
Therefore, consciousness has a fair chance of being present in other animals.

If somebody pointed out that this isn’t logically valid reasoning, would you consider that a candidate refutation of your background knowledge (as you suggested as a way forward)?

Yes, if the criticism is general enough to refute the other couple variations of this sort of reasoning that leads me to believe other animals are conscious.

[T]he idea that creativity is required for consciousness […] does not explain why uncreative animals are conscious […]

Well, you can hardly criticize a theory for not doing something it’s not meant to do!

The problem is that we have my (and, I assume, many others’) background knowledge B with which we infer facts F1 and F2.
F1: Calculators are uncreative and are not conscious.
F2: Animals are uncreative and conscious.

So whilst F1 breaks symmetry in your favour, F2 is its undoing.

I think a definite way forward is to dig into the reasoning that leads people to believe other animals are conscious.

#117 · Kieren (people may not be who they say they are) · in response to comment #110

Popperians come across as if they are allergic to the words “justification”, “support”, etc.

Yes, because we take seriously what Deutsch wrote in ch. 10 of BoI:

So the thing [justificationists] call ‘knowledge’, namely justified belief, is a chimera. It is unattainable to humans except in the form of self-deception; it is unnecessary for any good purpose; and it is undesired by the wisest among mortals.

(Upon further reflection, I don’t like the last part-sentence, as there’s a bit of intimidation going on there.)

Back to your comment:

[…] I don’t think his use of the word “justified” is grounds for dismissing his argument.

It is, for the reason I have mentioned: he thinks that what isn’t justified isn’t rational. The view that beliefs should be justified isn’t justified. So it doesn’t pass the ‘mirror test’, as Logan Chipkin calls it.

Corroboration lets us know which of our yet-to-be-falsified theories to prefer for practical predictions. So there seems to be a gap that needs filling.

What’s interesting is that I’ve never run into a situation where I wished I had corroboration to help me break symmetry. I also don’t seem to run into situations much where multiple viable yet conflicting theories are left over. The only situation I can remember off the top of my head is thinking that non-creative processes might give rise to consciousness, so I couldn’t yet break symmetry in favor of the notion that only creative processes do, but then I found a refutation to that claim (I consider it a refutation – others might not).

But yeah, either way, maybe what Elliot’s written fills the gap. I have yet to read it.

Regarding Salmon’s remark about being critical of critical methods:

But [Popper’s] answer is inappropriate in this context because our aim is precisely to subject his philosophical views, in the best Popperian spirit, to severe criticism.

To be clear, you think Salmon’s saying that Popper presupposes the thing we wish to be critical of, namely being critical?

Perhaps it is the alternative/replacement to corroboration, but if it is relying on Elliot’s yes/no philosophy then I don’t think it will provide enough.

Why not? (I haven’t studied yes/no philosophy.)

For example, what if a level of self referential modelling within a program conjures up consciousness?

If I had a nickel for every time I’ve heard this…

Routinely I find evolved aspects of my biological self are also present in other animals.
Consciousness is an evolved aspect of myself.
Therefore, consciousness has a fair chance of being present in other animals.

If somebody pointed out that this isn’t logically valid reasoning, would you consider that a candidate refutation of your background knowledge (as you suggested as a way forward)?

Yes, if the criticism is general enough to refute the other couple variations of this sort of reasoning that leads me to believe other animals are conscious.

Consider this variation. Say everyone owns an urn with colored beads in it, and say you can look only at the beads in your own urn (since consciousness is private, as you called it), and it is common knowledge that everyone owns urns:

‘Beads are present both in my urn and others’ urns.
My urn contains red beads in particular.
Therefore, red beads have a fair chance of being present in other people’s urns.’

See the problem?

I think a definite way forward is to dig into the reasoning that leads people to believe other animals are conscious.

Maybe later, as this discussion is already branching out too much, which makes it harder to address criticisms and make progress. I suggest focusing only on the bead example for now. We can always get to why people believe that animals are conscious later.

#119 · dennis (verified commenter) · in response to comment #117 · Referenced in comment #137

[…] I don’t think his use of the word “justified” is grounds for dismissing his argument.

It is, for the reason I have mentioned: he thinks that what isn’t justified isn’t rational. The view that beliefs should be justified isn’t justified. So it doesn’t pass the ‘mirror test’, as Logan Chipkin calls it.

In my previous response I provided some examples of “reasons” that could be used to justify/persuade someone. These were reasons that would be permitted even under a purely Popperian epistemology (“this theory explains a surprising fact”, “this theory has survived severe falsification attempts”, etc). Don’t you agree that a theory requires such reasons in its favour to be rationally deemed as a good theory?

If I claim you cannot enter your bedroom because there is a tiger in there, you will naturally ask for reasons why I think that. Perhaps I will describe what I heard or show you pictures of the tiger. I would call these reasons that justify my claim. Don’t you think such reasons are required for me to persuade you in such a situation?

What’s interesting is that I’ve never run into a situation where I wished I had corroboration to help me break symmetry. I also don’t seem to run into situations much where multiple viable yet conflicting theories are left over. The only situation I can remember off the top of my head is thinking that non-creative processes might give rise to consciousness, so I couldn’t yet break symmetry in favor of the notion that only creative processes do, but then I found a refutation to that claim (I consider it a refutation – others might not).

I think Popper explicitly presents the social sciences as a domain where corroboration is necessary. It is a science where we know the theories are not true, but instead approximations, or useful instruments for predicting social behaviours, wellbeing etc. One could conjecture all kinds of causal theories, but unless the theory is criticized with a number of critical tests, we have nothing to break its symmetry over its negation or competing alternatives. We find ourselves with multiple false theories, and we want to know which ones best approximate reality. E.g., is human happiness better fostered through meaningful work, income, mental health education, etc.?

But [Popper’s] answer is inappropriate in this context because our aim is precisely to subject his philosophical views, in the best Popperian spirit, to severe criticism.

To be clear, you think Salmon’s saying that Popper presupposes the thing we wish to be critical of, namely being critical?

I don’t think the problem is Popper presupposing. The problem is that his answer just doesn’t answer the question. To me it looks like this:

Q: I have criticism X of corroboration. How do you respond?
A: My idea of corroboration has survived all criticism.
His theory surviving previous criticism is irrelevant to deciding whether it survives this particular criticism, right?

Perhaps it is the alternative/replacement to corroboration, but if it is relying on Elliot’s yes/no philosophy then I don’t think it will provide enough.

Why not? (I haven’t studied yes/no philosophy.)

What I’ve read of yes/no doesn’t go deep enough to answer these sorts of questions (too high level).

For example, what if a level of self referential modelling within a program conjures up consciousness?

If I had a nickel for every time I’ve heard this…

Do you have a refutation for this sort of idea and all similar variations? Perhaps in your FAQ?

Consider this variation. Say everyone owns an urn with colored beads in it, and say you can look only at the beads in your own urn (since consciousness is private, as you called it), and it is common knowledge that everyone owns urns:

‘Beads are present both in my urn and others’ urns.
My urn contains red beads in particular.
Therefore, red beads have a fair chance of being present in other people’s urns.’

See the problem?

That’s funny, I nearly provided a beads/jars analogy myself :)

I think your variation is missing an important aspect (you didn’t let us look into any other urns at all). I would instead put it like this:

Myself and others have a number of urns (numbered from 1 to N) with a bead inside each.
We opened and compared all but one of the urns.
Each of their urns was found to contain the same coloured bead as my urn of the same number.
My last urn contains a red bead.
Therefore their last urn probably contains a red bead.

#131 · kieren (people may not be who they say they are) · in response to comment #119

[…] Don’t you agree that a theory requires such reasons in its favour to be rationally deemed as a good theory?

If I claim you cannot enter your bedroom because there is a tiger in there, you will naturally ask for reasons why I think that. Perhaps I will describe what I heard or show you pictures of the tiger. I would call these reasons that justify my claim. Don’t you think such reasons are required for me to persuade you in such a situation?

Depending on the details of the situation, that may well be the case, but it’s the inverse that matters: it’s that the absence of such reasons would cause me to dismiss the claim that there’s a tiger in my room. Whether I then consider the presence of such ‘reasons’ a justification, or whether they satisfy me, is just psychological.

For example, if there really is a tiger in my room, then if I listen closely, I should hear growling or some other noises. At least eventually – maybe the tiger is currently sleeping. If I knocked on the door or agitated the tiger somehow I should be able to hear it.

Now, if I do hear growling, that does not mean there really is a tiger in the room. It could be a recording, for example. Maybe it’s a prank. It could be any number of things. As Deutsch likes to say, there’s no limit to the size of error we can make.

Call failed refutations a reason in favor of a theory if you like – I think what’s important is that we have a critical attitude toward our theories.

I think Popper explicitly presents the social sciences as a domain where corroboration is necessary. It is a science where we know the theories are not true, but instead approximations, or useful instruments for predicting social behaviours, wellbeing etc. [emphasis added]

That doesn’t sound like Popper. It sounds like instrumentalism. But if you have a quote, I may change my mind. (Note the analogy to your tiger example here: I’m not asking for a reason your claim is true – it’s that, if your claim is true, then it should be possible to provide such a quote, whereas if it is false, it should be impossible to provide such a quote.)

Q: I have criticism X of corroboration. How do you respond?
A: My idea of corroboration has survived all criticism.
His theory surviving previous criticism is irrelevant to deciding whether it survives this particular criticism, right?

Yes. But I don’t think Popper would have given that answer A because he knew that past performance is no indication of future performance. He instead would have addressed criticism X directly, presumably.

For example, what if a level of self referential modelling within a program conjures up consciousness?

If I had a nickel for every time I’ve heard this…

Do you have a refutation for this sort of idea and all similar variations? Perhaps in your FAQ?

I’ve written a little bit about self-referential stuff in my book. I think discussing this bit further would take us down a mostly unrelated tangent but I do recommend reading the book in general.

Re the beads, I think your variation of my example needlessly breaks with consciousness being private, but yes it does contain the same problem. Do you see it?

#132 · dennis (verified commenter) · in response to comment #131

Upon reflection, I’ve realized that I made a mistake in a previous comment when I implied that consciousness has nothing to do with self-referentiality.

I do think self-referentiality is an important part of any conscious mind, since my neo-Darwinian approach to the mind introduces self-replicating ideas within a mind, and self-replication in turn depends on self-referentiality.

What I do not think is that recursion necessarily plays a role in consciousness, which seems to be a very popular theory.

The mistake was that I read “self referential” to mean ‘recursive’, and while recursion is necessarily self-referential, it’s not the only kind of self-referentiality there is, and another kind may well be an integral part of consciousness.

#137 · dennis (verified commenter) · in response to comment #132

Depending on the details of the situation, that may well be the case, but it’s the inverse that matters: it’s that the absence of such reasons would cause me to dismiss the claim that there’s a tiger in my room. Whether I then consider the presence of such ‘reasons’ a justification, or whether they satisfy me, is just psychological.

For example, if there really is a tiger in my room, then if I listen closely, I should hear growling or some other noises. At least eventually – maybe the tiger is currently sleeping. If I knocked on the door or agitated the tiger somehow I should be able to hear it.

Now, if I do hear growling, that does not mean there really is a tiger in the room. It could be a recording, for example. Maybe it’s a prank. It could be any number of things. As Deutsch likes to say, there’s no limit to the size of error we can make.

We seem to agree here. It would be unreasonable to believe there’s a tiger in your room absent any reason/evidence. We both qualify this with the fact that the reason/evidence cannot guarantee that we are not in error. I think Salmon is in agreement too. When Salmon claims that corroboration has no basis, he is claiming that Popper’s claim is absent of reasons. I do not believe he is asking for an infallible justification. Salmon wants Popper to show that he has knocked on the door of corroboration, and that corroboration growled back.

That doesn’t sound like Popper. It sounds like instrumentalism. But if you have a quote, I may change my mind. (Note the analogy to your tiger example here: I’m not asking for a reason your claim is true – it’s that, if your claim is true, then it should be possible to provide such a quote, whereas if it is false, it should be impossible to provide such a quote.)

The quote comes from Conjectures and Refutations. Note that corroboration is said to be an indication of verisimilitude (truthlikeness).

“Ultimately, the idea of verisimilitude is most important in cases where we know that we have to work with theories which are at best approximations-that is to say, theories of which we actually know that they cannot be true. (This is often the case in the social sciences.) In these cases we can still speak of better or worse approximations to the truth (and we therefore do not need to interpret these cases in an instrumentalist sense).”

I think medicine is another domain where we often decide between theories based on their level of corroboration. We often don’t yet know the mechanism of action of our treatments, but we might know that one treatment has survived a greater level of empirical testing.

Yes. But I don’t think Popper would have given that answer A because he knew that past performance is no indication of future performance. He instead would have addressed criticism X directly, presumably.

Well, that seems to be the closest he came to addressing this particular criticism. If you think there is a better defence of corroboration I would love to hear it.

I do think self-referentiality is an important part of any conscious mind, since my neo-Darwinian approach to the mind introduces self-replicating ideas within a mind, and self-replication in turn depends on self-referentiality.

So would you agree that it is at least plausible that some form of self referential modelling occurring within the brain could be the cause of consciousness?

Re the beads, I think your variation of my example needlessly breaks with consciousness being private, but yes it does contain the same problem. Do you see it?

Sorry, I didn’t make it very clear, but the contents of the last earn is kept private as an analogy for consciousness. I’m not sure which problem you are referring to. The reasoning is definitely not the strongest, but it is enough to provide some likelihood to the conclusion, and that is all that is required for us to play it safe in regards to animal suffering.

#151 · Kieren (people may not be who they say they are) · Referenced in post ‘Wrong-Number Pattern’

Salmon wants Popper to show that he has knocked on the door of corroboration, and that corroboration growled back.

Did you mean to say ‘and that corroboration did not growl back’?

Regarding the quote from C&R: you previously said Popper claimed that the social sciences are “useful instruments for predicting social behaviours [or] wellbeing”. Your C&R quote doesn’t contain anything to this effect.

Well, that seems to be the closest he came to addressing this particular criticism.

Popper wants to “adopt critical methods which have themselves withstood severe criticism”. He would have addressed criticisms of corroboration. (I imagine there are examples of his doing so but I do not wish to look up the literature at this moment.)

If you think there is a better defence of corroboration I would love to hear it.

I don’t care to defend corroboration because I don’t need it.

So would you agree that it is at least plausible that some form of self referential modelling occurring within the brain could be the cause of consciousness?

It’s more than plausible: I wrote that “self-replication […] depends on self-referentiality” (emphasis added). But again, discussing this bit further would take us down a mostly unrelated tangent.

Btw you need a hyphen between “self” and “referential” in “self referential modelling”. In some cases hyphenation rules are confusing.

[…] the contents of the last earn is kept private […]

“urn” and “are”

As an aside, I’ve noticed lots of people making the mistake of using mismatching numbers for the verb and subject of a sentence if there’s another noun of a different number between them and therefore closer to the verb. It’s interesting grammatically. Maybe some people’s algorithm for determining the verb’s number is to use that of what they believe to be the closest preceding noun. In this case, that’s “earn”, which is singular, whereas the subject is “contents” (plural), so the verb should be plural as well.

People shouldn’t use that algorithm because it doesn’t work in cases like the one above. They should instead look to the subject’s number, no matter how far away from the verb it is. If they have trouble remembering, that’s easy to correct in writing: just read the sentence again and look for the subject and its number. Or they can write shorter sentences, or they can structure their sentences such that their algorithm does work, for example: ‘the last earn’s contents are kept private’.

When speaking it’s a bit harder; people could use shorter sentences so there’s less of a possibility of another noun separating the subject and verb, and with shorter sentences it’s easier to remember what the subject is while speaking.
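Since I mentioned coding: the two agreement “algorithms” above can be sketched in code. This is only a toy illustration with hand-tagged words (the tagging scheme is made up for the example; it’s not a real parser):

```python
# Toy comparison of two heuristics for picking a verb's number in a
# sentence like "the contents of the last urn [is/are] kept private".
# Each word is paired with its grammatical number, or None for non-nouns.

def closest_noun_heuristic(tagged_words):
    """Faulty algorithm: agree with the noun nearest the verb."""
    for word, number in reversed(tagged_words):
        if number is not None:
            return number
    return "singular"  # arbitrary fallback if no noun is found

def subject_heuristic(tagged_words, subject_index):
    """Correct rule: agree with the subject, however far from the verb."""
    return tagged_words[subject_index][1]

# Subject is "contents" (plural), but the noun closest to the verb
# is "urn" (singular).
sentence = [
    ("the", None), ("contents", "plural"), ("of", None),
    ("the", None), ("last", None), ("urn", "singular"),
]

print(closest_noun_heuristic(sentence))  # singular -> wrong verb "is"
print(subject_heuristic(sentence, 1))    # plural   -> correct verb "are"
```

The faulty heuristic gives the wrong answer precisely when an intervening noun’s number differs from the subject’s, which is the pattern described above.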

I love languages and am interested in grammar, and writing well is an important skill, especially in discussions where misunderstandings are commonplace. (Though my notes on English writing and grammar should always be taken with a grain of salt since I’m not a native speaker.)

Sorry, I didn’t make it very clear, but the contents of the last earn is kept private as an analogy for consciousness.

My version is better because people shouldn’t be able to look into other people’s urns at all for that same privacy reason.

In any case, even drawing beads from a single urn, draw as many as you like, the drawn beads’ colors are no indication whatsoever of the next bead’s. That’s the problem.

[We should] play it safe in regards to animal suffering.

You seem to be advocating the precautionary principle, which, for the reasons Deutsch explains in BoI, is a bad idea.

#186 · dennis (verified commenter) · in response to comment #151 · Referenced in post ‘Wrong-Number Pattern’ and in comment #355

Sorry I took so long to reply. Busy over Christmas and stuck into a few side projects at the moment.

Did you mean to say ‘and that corroboration did not growl back’?

No. The analogy for me is that I find Popper’s claim about corroboration absent of reasons, and therefore I dismiss it; likewise, you would dismiss my claim about the tiger because of an absence of reasons unless it did growl back (a reason), etc.

Regarding the quote from C&R: you previously said Popper claimed that the social sciences are “useful instruments for predicting social behaviours [or] wellbeing”. Your C&R quote doesn’t contain anything to this effect.

That was more my own description of social sciences. The key point is that Popper provided social sciences as an example of a domain where corroboration is necessary. I also provided the domain of medicine as another example.

I don’t care to defend corroboration because I don’t need it.
I disagree. How do you account for the fields of social sciences and medicine where we often make use of a theory because of its survival of past testing, not because we think it is true (we may know it NOT to be true, as Popper claims of social sciences)?

So would you agree that it is at least plausible that some form of self referential modeling occurring within the brain could be the cause of consciousness?

It’s more than plausible: I wrote that “self-replication […] depends on self-referentiality” (emphasis added). But again, discussing this bit further would take us down a mostly unrelated tangent.

Other than for self-replication, do you think self-referential modeling within the brain is a possible cause of consciousness?

I love languages and am interested in grammar, and writing well is an important skill, especially in discussions where misunderstandings are commonplace. (Though my notes on English writing and grammar should always be taken with a grain of salt since I’m not a native speaker.)

Thanks for the corrections and tips :)

Sorry, I didn’t make it very clear, but the contents of the last earn is kept private as an analogy for consciousness.

My version is better because people shouldn’t be able to look into other people’s urns at all for that same privacy reason.
Well, if the urns are an analogy for aspects of our biological selves, then most urns are made public (like you get to see that I have limbs, eyes, hands like yourself, and that I am capable of language, talking, breathing, eating like yourself). Consciousness is the only private urn.

In any case, even drawing beads from a single urn, draw as many as you like, the drawn beads’ colors are no indication whatsoever of the next bead’s. That’s the problem.

If we actually did open 30 random jars and all the beads were red, would you not bet on red beads in the next jar? All that is required is for us not to be aware of any prearrangement of the jars. Similarly, we are not aware of any prearrangement of human attributes such that consciousness is the one aspect of ourselves that is not at all uniform.

[We should] play it safe in regards to animal suffering.

You seem to be advocating the precautionary principle, which, for the reasons Deutsch explains in BoI, is a bad idea.

He seems to make the argument that we should put caution to the side if it would inhibit knowledge growth (and therefore the infinite good that comes from it), since the bad that comes of it would only be finite. Do you agree? If so, then you must believe that protecting animals from potential suffering inhibits knowledge growth and that the potential suffering would only ever be finite?

#225 · Kieren (people may not be who they say they are)

Regarding self-referentiality, I wrote previously:

I think discussing [self-referentiality] further would take us down a mostly unrelated tangent […].

You continued anyway. Then, later on, in another comment, I wrote:

[A]gain, discussing [self-referentiality] further would take us down a mostly unrelated tangent.

That was the second time I recommended not discussing this matter further.

Now you’re continuing again:

Other than for self-replication, do you think self-referential modeling within the brain is a possible cause of consciousness?

Why do you ignore my warnings that discussing this issue would lead us down a mostly unrelated tangent?

Separately, you wrote:

How do you account for the fields of social sciences and medicine where we often make use of a theory because of its survival of past testing, not because we think it is true […]?

You wrote this as a quote but it’s not a quote. Presumably this happened because you quoted one of my lines and then didn’t put a blank line between the quote and your text. As you write your comments, check the markdown preview on the right before submitting them.

To answer your question: the theory may be really good (“hard to vary”, to use Deutsch’s terminology). It may be harder to vary than all the other theories we have guessed so far. So it’s not just that a theory has survived testing. I could imagine cases where you have two rival theories, one of which survived testing and one of which failed a test, and you still prefer the latter. Or your preferred theory may be the only one that has survived testing.

I understand that we know in physics that at least one of general relativity and quantum physics must be false, maybe both, because they contradict each other. That doesn’t stop us from using general relativity for, say, navigation, and it doesn’t stop us from using quantum theory to explain the outcomes of double-slit experiments. And note that so far I have written this and the previous paragraph without invoking corroboration. Granted, physics isn’t a social science or medicine, but why should it be different there?

Separately, you wrote:

If we actually did open 30 random jars and all the beads were red, would you not bet on red beads in the next jar?

I ask you in turn: in the old example of the farm animals being fattened up every day and growing more and more confident that the farmer only has their well-being in mind, should they bet the day before the slaughter that the next day he will feed them again?

#239 · dennis (verified commenter) · in response to comment #225

Why do you ignore my warnings that discussing this issue would lead us down a mostly unrelated tangent?

Sorry, I guess I still saw it as relevant and so I continued on. The reason I find it relevant is because self-referentiality could be an example of an alternate cause of consciousness, which would refute your claim that creativity is the only remaining explanation. Why don’t you think it is relevant?

To answer your question: the theory may be really good (“hard to vary”, to use Deutsch’s terminology). It may be harder to vary than all the other theories we have guessed so far. So it’s not just that a theory has survived testing.

This answer doesn’t satisfy me because I’ve come to see the hard-to-vary principle as essentially an account of corroboration/induction. I have an outline of my argument for this here.
https://docs.google.com/document/d/1XL3yp1KfOLmnMSpUUA2GmiMLi7bxDg7Wj5E8j0MUZu0/edit?usp=sharing

I could imagine cases where you have two rival theories, one of which survived testing and one of which failed a test, and you still prefer the latter. Or your preferred theory may be the only one that has survived testing.

Right, but those would be cases where you know significantly more than just the past success of the theory, right? Those are not the cases I am referring to.

I understand that we know in physics that at least one of general relativity and quantum physics must be false, maybe both, because they contradict each other. That doesn’t stop us from using general relativity for, say, navigation, and it doesn’t stop us from using quantum theory to explain the outcomes of double-slit experiments. And note that so far I have written this and the previous paragraph without invoking corroboration. Granted, physics isn’t a social science or medicine, but why should it be different there?

This example can work if we imagine things a little differently. If we did find that both general relativity and quantum physics were false (in some aspect), what argument would you provide for your continued use of these theories (assuming you would still make use of them)? Do you not invoke corroboration then?

If we actually did open 30 random jars and all the beads were red, would you not bet on red beads in the next jar?

I ask you in turn: in the old example of the farm animals being fattened up every day and growing more and more confident that the farmer only has their well-being in mind, should they bet the day before the slaughter that the next day he will feed them again?

I will answer your hypothetical, but please answer mine too.

The farm animals would be better off if they predicted the slaughter, but assuming they reason similarly to us, and that they know nothing other than the fact that they get fed each morning by the farmer, then their rationality would lead them to bet that they would be fed again tomorrow, and 99.9% of the time they would be right. If the animals knew something about human history, farming practices, etc. then things would be different.

#241 · Kieren (people may not be who they say they are) · Referenced in comments #261, #329

The reason I find it relevant is because self-referentiality could be an example of an alternate cause of consciousness, which would refute your claim that creativity is the only remaining explanation.

No, because, as I’ve explained, creativity itself seems to rely on self-referentiality by way of self-replicating ideas. In which case it’s not an alternate cause but part of the same cause.

Why don’t you think it is relevant?

Strikes me as largely if not entirely separate from the issue of corroboration.

To answer your question: the theory may be really good (“hard to vary”, to use Deutsch’s terminology). It may be harder to vary than all the other theories we have guessed so far. So it’s not just that a theory has survived testing.

This answer doesn’t satisfy me because I’ve come to see the hard-to-vary principle as essentially an account of corroboration/induction. I have an outline of my argument for this here.
https://docs.google.com/document/d/1XL3yp1KfOLmnMSpUUA2GmiMLi7bxDg7Wj5E8j0MUZu0/edit?usp=sharing

I start to read the first line, which says:

David Deutsch’s hard-to-vary (HTV) criteria [1] is offered […]

The verb is “is” so the subject must be singular. But the subject is “criteria”, which is plural. It’s one criterion. Foreign-sounding words ending in -on are usually Greek and often end in -a when they’re plural. E.g. phenomenon -> phenomena, lexicon -> lexica (or lexicons, but even there the point is you’d never say ‘one lexicons’). I’m no expert on Greek, so see for yourself. Lots of people fuck it up and say “many phenomenon” or “one phenomena”. Or when speaking they pronounce the last syllable so quietly you can’t tell, to hide their ignorance. It’s such an easy thing to get right.

Then, in the footnote marked [1], the title to Deutsch’s book says “Beginning of Infinity”. That’s not the correct title. It’s The Beginning of Infinity.

So I’m only nine words in and have already found two blunders, which makes me question how much value the document can offer. I don’t wish to read on at this time.

Right, but those would be cases where you know significantly more than just the past success of the theory right?

Don’t we always? We always have theories about our theories, background knowledge, expectations…

If we did find that both general relativity and quantum physics were false (in some aspect), what argument would you provide for your continued use of these theories? […]

Instead of quantum physics, consider Newtonian physics, which also conflicts with general relativity and, as I understand it, is often used in engineering and experimental physics instead of general relativity, despite symmetry having been broken in favor of general relativity. Its continued use is not due to its having worked in the past (i.e., having survived many tests – on the contrary, I understand it has also failed many), but because the errors it introduces compared to general relativity in these contexts are negligible. We can know this from theory alone, without running any experiments. There may be other considerations such as Newton’s equations being easier than Einstein’s (I don’t know if that’s true, but it’s easy to imagine other cases involving other theories where it is).

Do you not invoke corroboration then?

As you can see in my previous paragraph: no. I instead invoked two other properties: negligible error introduction and ease of use.

If we actually did open 30 random jars and all the beads were red, would you not bet on red beads in the next jar?

I may well.

#242 · dennis (verified commenter) · in response to comment #241 · Referenced in comments #329, #382
Reply

Adding to my previous comment. You wrote:

If the animals knew something about human history, farming practices, etc then things would be different.

Yes – and if they don’t already, they might conjecture something about that (assuming they can think like humans). If their conjecture is, as I’ve said, that their farmer only has their wellbeing in mind, then they are wrong every time, even if their prediction is correct some of the time. And if they wish to explain rather than just predict, that’s a problem. Especially if it results in death.

Humans’ situation isn’t all that different as sustained failure to explain the world around us also results in death.

#243 · dennis (verified commenter) · in response to comment #242
Reply

The reason I find it relevant is because self-referentiality could be an example of an alternate cause of consciousness, which would refute your claim that creativity is the only remaining explanation.

No because, as I’ve explained, creativity seems to itself rely on self-referentiality by way of self-replicating ideas. In which case it’s not an alternate cause but part of the same cause.

I think self-referentiality is more general than what is required for self-replicating ideas, but if you don’t want to go down that path then I will cease.

Why don’t you think it is relevant?

Strikes me as largely if not entirely separate from the issue of corroboration.

But it is relevant to the issue of animal consciousness.

So I’m only nine words in and have already found two blunders, which makes me question how much value the document can offer. I don’t wish to read on at this time.

No more blunders than my usual level :)
I don’t think I’ll articulate it much better here, so I won’t try.

Right, but those would be cases where you know significantly more than just the past success of the theory right?

Don’t we always? We always have theories about our theories, background knowledge, expectations…

Yes, but in the examples I was getting at (social sciences, medicine, etc), I was referring to cases where we don’t yet have background knowledge that lets us explain why a particular theory/treatment is good (e.g. no known mechanism of action). This doesn’t mean we are without any background knowledge. For example, the conclusions drawn from a clinical trial are based on the knowledge that the treatment was provided to a random sample of the population.

If we did find that both general relativity and quantum physics were false (in some aspect), what argument would you provide for your continued use of these theories? […]

Instead of quantum physics, consider Newtonian physics, which also conflicts with general relativity and, as I understand it, is often used in engineering and experimental physics instead of general relativity, despite symmetry having been broken in favor of general relativity. Its continued use is not due to its having worked in the past (i.e., having survived many tests – on the contrary, I understand it has also failed many), but because the errors it introduces compared to general relativity in these contexts are negligible. We can know this from theory alone, without running any experiments. There may be other considerations such as Newton’s equations being easier than Einstein’s (I don’t know if that’s true, but it’s easy to imagine other cases involving other theories where it is).

I agree that you could proceed here without corroboration: the use of Newtonian physics is justified because you know that it is an approximation to your current best theories. However, this scenario is too different from the hypothetical I posed. Could you please respond to it?

If we actually did open 30 random jars and all the beads were red, would you not bet on red beads in the next jar?

I may well.

Would your betting have anything to do with the fact that the last 30 jars that you randomly selected contained red beads? Does it make it easier if it was 10 thousand jars?

If the animals knew something about human history, farming practices, etc then things would be different.

Yes – and if they don’t already, they might conjecture something about that (assuming they can think like humans). If their conjecture is, as I’ve said, that their farmer only has their wellbeing in mind, then they are wrong every time, even if their prediction is correct some of the time. And if they wish to explain rather than just predict, that’s a problem. Especially if it results in death.

Humans’ situation isn’t all that different as sustained failure to explain the world around us also results in death.

I agree.

#248 · Kieren (people may not be who they say they are) ·
Reply

I agree that you could proceed here without corroboration: the use of Newtonian physics is justified because you know that it is an approximation to your current best theories. However, this scenario is too different from the hypothetical I posed. Could you please respond to it?

OK your hypothetical was:

If we did find that both general relativity and quantum physics were false (in some aspect), what argument would you provide for your continued use of these theories? […]

E.g. general relativity is needed to keep GPS running and you’d want to keep that running while finding the successor theory to GR.

If we actually did open 30 random jars and all the beads were red, would you not bet on red beads in the next jar?

I may well.

Would your betting have anything to do with the fact that the last 30 jars that you randomly selected contained red beads? Does it make it easier if it was 10 thousand jars?

Psychologically, yes to both. People break symmetry this way all the time. That doesn’t change the fact that, epistemologically, induction doesn’t work, and that this way of breaking symmetry is invalid. It was either Popper or Hume who broke the problem of induction into the logical problem of induction on the one hand and the psychological one on the other.

#252 · dennis (verified commenter) · in response to comment #248 · Referenced in comments #329, #382, #490
Reply

E.g. general relativity is needed to keep GPS running and you’d want to keep that running while finding the successor theory to GR.

Do I have this right? You would continue using GR because you want the things it explains to keep working?

Psychologically, yes to both. People break symmetry this way all the time. That doesn’t change the fact that, epistemologically, induction doesn’t work, and that this way of breaking symmetry is invalid.

I agree that people often break symmetry this way, but do you? Given that you think it is invalid?

#260 · Kieren (people may not be who they say they are) · · Referenced in comment #329
Reply

Do I have this right? You would continue using GR because you want the things it explains to keep working?

I was referring not to the things it explains but the things that depend on it. If we were to reject GR in its entirety, we’d also have to reject things that use GR. Like GPS (from what I understand). But we wouldn’t throw GPS out the window if we learned GR is false (and GPS would keep working the same regardless).

I agree that people often break symmetry this way, but do you? Given that you think it is invalid?

As I’ve said, I may well.

A couple more thoughts on induction that I’ve had since my previous comment:

  1. Supporters of two conflicting theories may observe several pieces of evidence corroborating both theories. As a result, they might become more confident in their respective theory as each piece of evidence comes in. As always, they’d be wrong to mistake their feelings about the theory for a truth criterion (or probability criterion). They’d have to be wrong, since the theories conflict: both sides cannot be right.

  2. The other day, I was building an image upload for a website. Part of the feature was to display the images back to the user before he hit enter to confirm the upload. I noticed a bug: the images were sometimes displayed in a different order than the one in which the user picked them. That made it more difficult for the user to confirm his selection, so I set out to fix the bug. The nature of the bug was that I displayed the images in the order in which they were loaded, but larger images take longer to load, of course, so they’d be displayed later. I also noticed that the browser’s file API gives me the images in the order in which they were selected by default.

    I fixed the bug by rendering each image’s container immediately, in order, and then rendering each image within its respective container whenever it was done loading. Because the containers rendered in order, so did the images.

    Here’s the thing: when I tested whether my fix worked, I did not try to make repeating observations. I hoped for non-repeating observations so I could still reproduce the bug and thereby falsify my fix! And when I did not reproduce the bug only a few times in a row, I stopped testing because I already knew from the explanation of how and why the fix worked that I should never see the bug again. I did not keep testing the fix in hopes of getting more confident in it. (That really would have been rather pathetic on my part – like I’m hoping to feel good about my code or something.)
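The ordering logic of that fix can be sketched without a DOM. This is a minimal sketch, not the actual code from the site: `loadImage` and the image objects are illustrative. One slot is reserved per image, in selection order, before any load completes, so slow loads can’t reorder the display.

```javascript
// Minimal sketch of the fix described above (no DOM; names are illustrative).
// Reserving slots up front corresponds to rendering each image's container
// immediately; filling a slot corresponds to rendering the image inside its
// container once it has loaded.
async function renderInOrder(images, loadImage) {
  // Reserve one slot per image, in selection order, before any load finishes.
  const slots = images.map(() => null);
  await Promise.all(
    images.map(async (img, i) => {
      // A load may finish at any time...
      const loaded = await loadImage(img);
      // ...but it lands in its pre-reserved slot, not at the end of a list.
      slots[i] = loaded;
    })
  );
  return slots;
}
```

In the browser, the slots correspond to the containers rendered immediately, and `loadImage` to each image’s asynchronous load; because the slots exist in order before any load completes, the final display order matches the selection order regardless of load times.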

#261 · dennis (verified commenter) · in response to comment #260 · Referenced in comment #329
Reply

But we wouldn’t throw GPS out the window if we learned GR is false (and GPS would keep working the same regardless).

But why? What reason do you have for thinking that GPS would continue to work?

As I’ve said, I may well.

I’m confused which comment you are referring to here. Are you referring to breaking symmetry with the hard-to-vary principle? Because that would be a different principle.

#328 · Kieren (people may not be who they say they are) · · Referenced in comment #329
Reply

[Dennis:] But we wouldn’t throw GPS out the window if we learned GR is false (and GPS would keep working the same regardless).

[Kieren:] But why? What reason do you have for thinking that GPS would continue to work?

What reason do I have for not thinking that? If GPS has worked at all, it’s because some truth is encoded in its functionality. We don’t know what that truth is, but to think that GPS would suddenly stop working if our state of mind changed is some weird version of solipsism or telekinesis or something.

[Kieren:] I agree that people often break symmetry this way, but do you? Given that you think it is invalid?

[Dennis:] As I’ve said, I may well.

[Kieren:] I’m confused which comment you are referring to here. Are you referring to breaking symmetry with the hard-to-vary principle? Because that would be a different principle.

I had linked to the wrong comment (parent comment instead of the comment itself; both ids appear on the same line so I may change the UI around that). I meant to link to #242. There’s also #252. Bottom of each.

#329 · dennis (verified commenter) · in response to comment #328
Reply

Here’s an article containing the grammatical mistake I mentioned:

[T]he full current of the scripts are breaking bad […].

The subject of the sentence is “current”, which is singular, so the verb should be “is” instead of “are”. But “scripts” is closer to the verb than “current” so I guess the interviewee mistook “scripts” for the subject.

The same restructuring I mentioned before could correct the mistake while continuing to use the closest noun to determine the verb’s number: ‘The scripts’ full current is breaking bad’.

#355 · dennis (verified commenter) · in response to comment #186 · Referenced in post ‘Wrong-Number Pattern
Reply

What reason do I have for not thinking that? If GPS has worked at all, it’s because some truth is encoded in its functionality. We don’t know what that truth is, but to think that GPS would suddenly stop working if our state of mind changed is some weird version of solipsism or telekinesis or something.

Is this not a case of using a theory based on its pass success (past testing)?

I had linked to the wrong comment (parent comment instead of the comment itself; both ids appear on the same line so I may change the UI around that). I meant to link to #242. There’s also #252. Bottom of each.

Sorry, I’m still not sure how your referenced comments are answering my question. Could you please elaborate on your answer? I’ll copy my question below.

I agree that people often break symmetry this way, but do you? Given that you think it is invalid?

#382 · Kieren (people may not be who they say they are) ·
Reply

Here’s the thing: when I tested whether my fix worked, I did not try to make repeating observations. I hoped for non-repeating observations so I could still reproduce the bug and thereby falsify my fix! And when I did not reproduce the bug only a few times in a row, I stopped testing because I already knew from the explanation of how and why the fix worked that I should never see the bug again. I did not keep testing the fix in hopes of getting more confident in it. (That really would have been rather pathetic on my part – like I’m hoping to feel good about my code or something.)

I think you have provided a good example of how many software bugs are solved. However, some bugs cannot be solved so cleanly. Perhaps there is a complex race condition that only errors occasionally, or the code is proprietary and not accessible for inspection. Compromises are made to meet deadlines and a “seems to fix it, but not sure why” can be acceptable. The basis on whether to accept such a solution is often the result of repeated testing. If the bug doesn’t happen more than 1 in 1000, then that might be fit for purpose.
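The acceptance rule described here (accept a fix if the bug reproduces no more than 1 in 1000 runs) could be sketched roughly as follows; `runOnce`, the run count, and the threshold are illustrative, not taken from the discussion:

```javascript
// Re-run a possibly flaky operation many times and accept the fix only
// if the observed failure rate stays at or below a chosen threshold.
// `runOnce` should return true on success, false when the bug reproduces.
function fitForPurpose(runOnce, runs = 1000, threshold = 1 / 1000) {
  let failures = 0;
  for (let i = 0; i < runs; i++) {
    if (!runOnce()) failures++;
  }
  return failures / runs <= threshold;
}
```

Of course, a rate measured in one environment (dev) need not carry over to another (prod), as noted later in the thread.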

#383 · Kieren (people may not be who they say they are) ·
Reply

Is this not a case of using a theory based on its pass success (past testing)?

I don’t think so. Why would it be?

Sorry, I’m still not sure how your referenced comments are answering my question. Could you please elaborate on your answer?

Not sure what you’re looking for. You asked me if I break symmetry that way and I said “I may well”. As in: I’m fallible. I may use wrong ways to break symmetry sometimes, even if I make an effort not to.

Compromises are made to meet deadlines and a “seems to fix it, but not sure why” can be acceptable. The basis on whether to accept such a solution is often the result of repeated testing. If the bug doesn’t happen more than 1 in 1000, then that might be fit for purpose.

I agree that people do that (hopefully only as a last resort). Of course, then they might find that they can reproduce the bug 1 in 1000 times only in dev, and that in prod it happens every time, or every other time, or whatever.

#388 · dennis (verified commenter) · in response to comment #383
Reply

I don’t think so. Why would it be?

Because your expecting GPS to work because it worked in the past.

Not sure what you’re looking for. You asked me if I break symmetry that way and I said “I may well”. As in: I’m fallible. I may use wrong ways to break symmetry sometimes, even if I make an effort not to.

That clarifies it. “I may well” is ambiguous as to why you would (maybe you have a valid reason). I think you could have just answered “no” since we don’t have to be so careful about restating that we are fallible every time we answer a question.

Circling back, this would mean that given 1000 random jars, you would not bet that the final jar contains a red bead, even though the previous 999 jars only contained red beads?

For you it would be just as rational to bet on a blue bead?

I would bet $10000 on red if given such an opportunity.

#390 · Kieren (people may not be who they say they are) ·
Reply

Because your [sic] expecting GPS to work because it worked in the past.

No. I’m expecting it to work because others have good explanations for why it works. Conversely, if those explanations stated that GPS only works for the hemisphere facing the sun at any given moment, I would expect it to work only intermittently. If our explanations said it will stop working in the year 2030 and why (maybe something changes about the universe that destroys it), I would expect it to stop working despite it having worked in the past.

In all those cases, our explanations tell us why GPS worked in the past and why and when it is or isn’t going to work in the future. In no case does the explanation say it’s going to work in the future because it has worked in the past.

I think you could have just answered “no” [to the question “If we actually did open 30 random jars and all the beads were red, would you not bet on red beads in the next jar?”] since we don’t have to be so careful about restating that we are fallible every time we answer a question.

So shouldn’t I have answered ‘yes’? Since “I may well” make the mistake of predicting the future from the past. (Note that it remains a mistake methodologically even if I happen to be right about the color of the bead.)

Circling back, this would mean that given 1000 random jars, you would not bet that the final jar contains a red bead, even though the previous 999 jars only contained red beads?

For you it would be just as rational to bet on a blue bead?

Depending on the explanation, yes. Knowing nothing else I probably would bet on the next jar containing only red beads (I think that’s what you mean when you say “the final jar contains a red bead”, emphasis added). But this strikes me as another case of distinguishing between the logical and the psychological. And, as always, it depends on what Popper calls background knowledge: what if I know the owner of the jars wants to fool me? What if I know there’s at least one blue bead in one of the jars and we haven’t found it yet? What if I’m at a casino and know a thing or two about how the odds are stacked against customers? What if I don’t? Etc.

#393 · dennis (verified commenter) · in response to comment #390
Reply

Kieren, I just stumbled upon a couple of passages in Popper’s Objective Knowledge (Oxford: Clarendon Press, 1983).

On p. 67, he speaks of “the logical justification of the preference for one theory over another” (emphasis removed) and calls it “the only kind of ‘justification’ which I believe possible […]”.

He also says on p. 7 (emphasis removed):

[T]he assumption of the truth of test statements sometimes allows us to justify the claim that an explanatory universal theory is false.

Do these quotes help your case?

#403 · dennis (verified commenter) · in response to comment #390 · Referenced in post ‘Criticism of David Deutsch’s ‘Taking Children Seriously and fallibilism’
Reply

Because your [sic] expecting GPS to work because it worked in the past. No. I’m expecting it to work because others have good explanations for why it works

But remember in this hypothetical we have found that both GR and quantum physics are wrong. Therefore, we no longer have good explanations for why GPS is working, right? How would you justify your continued use of GPS?

And, as always, it depends on what Popper calls background knowledge: what if I know the owner of the jars wants to fool me? What if I know there’s at least one blue bead in one of the jars and we haven’t found it yet? What if I’m at a casino and know a thing or two about how the odds are stacked against customers? What if I don’t? Etc.

Assume you don’t have any background knowledge like this which clearly influences your decision. You only have the 1000 jars which you are randomly opening. It sounds like you would bet that the last jar also contains red (even if only because of your psychology)?

Do these quotes help your case?

These quotes give a general statement of Popper’s views, but it’s his comments on corroboration that Salmon was reacting to (the stem of this discussion).

#470 · Kieren (people may not be who they say they are) ·
Reply

This discussion may be difficult if you take months to respond. Can you commit to responding within, say, a week?

By the way, the first quote in your last comment is a misquote of me. The first line (starting with “Because your”) should be a nested quote since you originally wrote that, not me.

#471 · dennis (verified commenter) · in response to comment #470
Reply

I’m interested in seeing this discussion to its conclusion, so I will make myself more responsive going forward.

#472 · Kieren (people may not be who they say they are) · · Referenced in comment #479
Reply

But remember in this hypothetical we have found that both GR and quantum physics are wrong. Therefore, we no longer have good explanations for why GPS is working, right?

We’d adjust our explanations to why it only works in certain cases, or why GR (despite being wrong) still explains GPS but not certain other things. We’ve done this with Newtonian physics: we understand why it only works as an approximation and when it’s still acceptable to use. From BoI ch. 5:

Newton’s predictions are indeed excellent in the context of bridge-building, and only slightly inadequate when running the Global Positioning System, but they are hopelessly wrong when explaining a pulsar or a quasar – or the universe as a whole. To get all those right, one needs Einstein’s radically different explanations.

Sometimes we don’t yet know why an explanation doesn’t work in some area, only that it doesn’t, until we find its successor – which, per Popper, will explain where and why its predecessor failed. But that’s for the negative cases. In a case where a theory does work, like Newtonian physics for bridge-building, yea, continue using it, I don’t see the problem. On the contrary, Newtonian physics may even have an advantage over relativity in legitimate applications, where, say, ease of use outweighs the fact that (I’m making this up) the 15th decimal place in the result is wrong, and you only need three decimal places anyway. Likewise, I’m not aware of anyone having found that GR does not work for GPS.

Back to your comment:

It sounds like you would bet that the last jar also contains red (even if only because of your psychology)?

Yes. As I have written: “[k]nowing nothing else I probably would bet on the next jar containing only red beads […]”.

These quotes give a general statement of Popper’s views, but it’s his comments on corroboration that Salmon was reacting to (the stem of this discussion).

We have also talked a bunch about justification, which Salmon invokes, too. Like when he writes “[w]hat I want to see is how corroboration could justify such a preference.” I had taken the position that justification is always impossible and never desirable – but Popper is more nuanced than that and makes room for some form of justification (while being careful about how he phrases it). (I think I’ve ‘inherited’ this mistake from Deutsch. FWIW, when Deutsch borrows ideas from Popper (and maybe others), there’s sometimes a reduction in quality, as I’ve written about here and here. I think fans of Deutsch should read those articles.) Since Popper accommodates justification a little bit, maybe I was wrong to reject it wholesale, and so maybe there’s some compatibility between Salmon and Popper.

#473 · dennis (verified commenter) · in response to comment #470
Reply

We’d adjust our explanations to why it only works in certain cases, or why GR (despite being wrong) still explains GPS but not certain other things. We’ve done this with Newtonian physics: we understand why it only works as an approximation and when it’s still acceptable to use. From BoI ch. 5:

Ok, but before we could adjust our theories there would be a period of time where all we have are theories that we know to be wrong. I would continue to use and rely on GPS during this period and I imagine you would too. Why would you continue to use GPS if not because of its past success?

This is close to the question of opening jars with coloured beads. We have no background knowledge to assist us, except for the known past success of our sample.

Sometimes we don’t yet know why an explanation doesn’t work in some area, only that it doesn’t, until we find its successor – which, per Popper, will explain where and why its predecessor failed. But that’s for the negative cases. In a case where a theory does work, like Newtonian physics for bridge-building, yea, continue using it, I don’t see the problem. On the contrary, Newtonian physics may even have an advantage over relativity in legitimate applications, where, say, ease of use outweighs the fact that (I’m making this up) the 15th decimal place in the result is wrong, and you only need three decimal places anyway. Likewise, I’m not aware of anyone having found that GR does not work for GPS.

I mentioned this earlier on, but Newton’s gravity isn’t a good example because its approximate success can be explained in terms of GR. However, if you imagine a period between knowing that Newton’s gravity is incorrect and before GR was discovered, then you can ask a similar question to the one I asked above. Why continue using Newtonian physics during this period?

Yes. As I have written: “[k]nowing nothing else I probably would bet on the next jar containing only red beads […]”.

Ok, and I would do the same. I would bet a lot of money that the last jar contains red beads. If you can remember from earlier in our conversation, the contents of the last jar are kept private, as an analogy for consciousness. The contents of the other jars are public and observable by others, representing our behaviors, anatomy, etc. In the same way I would bet that the last jar contains red beads, I would bet that other animals have consciousness. This is a reason why I have the belief/expectation that animals are conscious, which conflicts with your restriction to consciousness requiring creativity.

Since Popper accommodates justification a little bit, maybe I was wrong to reject it wholesale, and so maybe there’s some compatibility between Salmon and Popper.

It irritates me a little when Popperians react strongly to seeing the word “justification”. Popper rightfully rejects justification as far as it means to prove something as infallibly true, but the word also has a more everyday meaning. When I say “justify your claim”, I don’t mean “Prove absolutely and without error that your claim is true”, I just mean “Provide reasons why I should think your claim is any good”. Here “reasons” can be those that a Popperian restricts themselves to using. This is the meaning of justification that I read in the Salmon paper.

#474 · Kieren (people may not be who they say they are) ·
Reply

Ok, but before we could adjust our theories there would be a period of time where all we have are theories that we know to be wrong.

As fallibilists, isn’t that already the case, ~all the time?

I would continue to use and rely on GPS during this period and I imagine you would too.

Yes.

Why would you continue to use GPS if not because of its past success?

I think it’d be more like: a conjecture that GPS and GR can still solve some of the problems I need solved. To put it in your terms: there’s no ‘reason to believe’ that GPS or GR are wrong in their entirety – they’re wrong, but they contain truth. The true parts may still be useful.

Another reason to keep using GPS in such a scenario is tradition/dependency: lots of people rely on it and removing it would cause chaos, so you have no choice but to keep using it. (In short: dependency management, avoiding revolutions.) It’s a lot like in software development where introducing breaking changes should be done with care and ripping out entire pieces of software without replacement should generally be avoided. If my macOS is found to have a bug I generally (though there are some exceptions) will not (or simply cannot) stop using macOS. If possible, I’ll avoid the bug until it is fixed (ie a successor theory is found) or, if the bug is bad and pressing enough, I’ll try to switch, if only temporarily, to another OS that isn’t known to have this problem. In such cases, my thinking isn’t ‘my OS has worked in the past so it will work in the future’ – if, say, I’m not confident in Apple’s abilities, I may conclude that the OS won’t be fixed in the future – and my reason for continued use is my dependency on the OS and my theories around the nature of the bug (not wrecking the OS entirely, the OS still being safe to use overall, etc).

[I]f you imagine a period between knowing that Newton’s gravity is incorrect and before GR was discovered, then you can ask a similar question to the one I asked above. Why continue using Newtonian physics during this period?

You can extend my previous answer to this question. In short: Newtonian physics still contained truth, and people needed to keep building bridges.

By the way, I think historically there was such a period, but I’d have to look into it further.

In the same way I would bet that the last jar contains red beads, I would bet that other animals have consciousness. This is a reason why I have the belief/expectation that animals are conscious, which conflicts with your restriction to consciousness requiring creativity.

But for the beads we assumed no other (background) knowledge, whereas with animals we have lots of evidence even if we can’t see the figurative beads (ie look inside animals’ heads). If there were no such evidence nor any theoretical background, so that the situation with judging animal minds really were analogous to the example with the beads, I might agree with you about animals being conscious.

It irritates me a little when Popperians react strongly to seeing the word “justification”. Popper rightfully rejects justification as far as it means to prove something as infallibly true, but the word also has a more everyday meaning. When I say “justify your claim”, I don’t mean “Prove absolutely and without error that your claim is true”, I just mean “Provide reasons why I should think your claim is any good”. Here “reasons” can be those that a Popperian restricts themselves to using.

That’s fair.

#475 · dennis (verified commenter) ·
Reply

Ok, but before we could adjust our theories there would be a period of time where all we have are theories that we know to be wrong.

As fallibilists, isn’t that already the case, ~all the time?

The distinction is between a theory that has survived all falsification attempts (tentatively true), and one which has not (known to be false). So right now the problem is deciding what to do when all you have are theories that are known to be false.

I think it’d be more like: a conjecture that GPS and GR can still solve some of the problems I need solved. To put it in your terms: there’s no ‘reason to believe’ that GPS or GR are wrong in their entirety – they’re wrong, but they contain truth. The true parts may still be useful.

I don’t think this works. A conjecture that GPS will work tomorrow is arbitrary and easy to vary. I could just as easily conjecture that it will not work. From a Popperian perspective, if we still had a good, tentatively true theory explaining GPS, then we could rule out one of these options, but in this hypothetical we no longer have this.

Another reason to keep using GPS in such a scenario is tradition/dependency: lots of people rely on it and removing it would cause chaos, so you have no choice but to keep using it. (In short: dependency management, avoiding revolutions.)

I agree that people might just continue using GPS out of habit. However, if this were the only remaining reason for using it, then wouldn’t people quickly transition away from relying on it (especially for life-critical applications)? I think we can both agree that this wouldn’t happen, but why? Has it not got something to do with the past 50 years of successful use of GPS by billions of devices?

But for the beads we assumed no other (background) knowledge, whereas with animals we have lots of evidence even if we can’t see the figurative beads (ie look inside animals’ heads). If there were no such evidence nor any theoretical background, so that the situation with judging animal minds really were analogous to the example with the beads, I might agree with you about animals being conscious.

What sort of evidence tells us that animals are not conscious? It cannot be evidence of their lack of creativity (since creativity == consciousness is what is under question).

#476 · Kieren (people may not be who they say they are) · · Referenced in comment #490
Reply

The distinction is between a theory that has survived all falsification attempts (tentatively true), and one which has not (known to be false). So right now the problem is deciding what to do when all you have are theories that are known to be false.

I agree. I guess serious fallibilists always consider even their best guesses to be false, or eventually to be found false. But they might be going too far: sometimes we do speak the truth, if only accidentally. (But, of course, we can never know whether we have spoken the truth, as Xenophanes said, and we should remain critical.)

I don’t think this works. A conjecture that GPS will work tomorrow is arbitrary and easy to vary. I could just as easily conjecture that it will not work. From a Popperian perspective, if we still had a good, tentatively true theory explaining GPS, then we could rule out one of these options, but in this hypothetical we no longer have this.

Let me try another approach: what we know of Popperian epistemology (which is quite difficult to vary) says that theories that have survived lots of criticism contain mistakes and truth. Your question was: “Why would you continue to use GPS if not because of its past success?” That’s one of the reasons why – that I know that even if it contains mistakes, it also contains truth.

I agree that people might just continue using GPS out of habit.

I don’t think it’s habit. What I’ve described is a hard requirement/dependence. In this light, regarding your follow-up question:

However, if [habit] were the only remaining reason for using [GPS], then wouldn’t people quickly transition away from relying on it (especially for life-critical applications)? I think we can both agree that this wouldn’t happen, but why?

The reason they can’t is not habit but dependence and because coming up with new solutions is usually difficult. It takes skill, time, and also luck. They may quickly begin to work on alternatives, but it might take a while before they find a viable one. In the meantime, it seems to me they have no choice but to keep using GPS. Breaking with traditions is hard.

What sort of evidence tells us that animals are not conscious? It cannot be evidence of their lack of creativity (since creativity == consciousness is what is under question).

Following Deutsch, I think it’s more like: creativity leads to consciousness. As in: creativity bestows consciousness/consciousness is a side effect of creativity. I don’t think they’re the same.

For specific evidence, see (I may have linked to some of these before):

On the topic of animal sentience more generally, I recommend my ‘Animal-Sentience FAQ’.

#477 · dennis (verified commenter) · · Referenced in comment #482
Reply

Let me try another approach: what we know of Popperian epistemology (which is quite difficult to vary) says that theories that have survived lots of criticism contain mistakes and truth. Your question was: “Why would you continue to use GPS if not because of its past success?” That’s one of the reasons why – that I know that even if it contains mistakes, it also contains truth.

In this scenario, the only true parts of GPS that you are aware of are where it has been successful in the past. If this is a reason why you would continue to use GPS, then I don’t see how it is meaningfully different from relying on GPS because of its past success. This looks like corroboration or induction to me.

The reason they can’t is not habit but dependence and because coming up with new solutions is usually difficult. It takes skill, time, and also luck. They may quickly begin to work on alternatives, but it might take a while before they find a viable one. In the meantime, it seems to me they have no choice but to keep using GPS. Breaking with traditions is hard.

Ok, then consider applications of GPS where failures in the system could be catastrophic and worse than not running the system at all, e.g. precision timing of industrial control systems. How are you going to convince the operators to start running this system again, given that lives are at risk if something goes wrong?

I think I could convince them to operate the system based on its past success (massive sample size).

Following Deutsch, I think it’s more like: creativity leads to consciousness. As in: creativity bestows consciousness/consciousness is a side effect of creativity. I don’t think they’re the same.

Right, I understand.

The examples you give do not convince me. For example, the “Buggy Dogs” argument infers that they are not intelligent because of their behaviour. Fair enough. However, when you speak of “intelligence” you are actually referring to the definition of it in terms of creativity right? Therefore this is an invalid reason because you are referencing the very thing that is under question (creativity -> consciousness).

#478 · Kieren (people may not be who they say they are) · · Referenced in comment #482
Reply

Have you noticed that, when I offer refutations or counterexamples, you then keep tweaking the scenarios until I’m more or less forced to agree with you?

For example:

In this scenario, […].

and

Ok, then consider […].

and

However, if you imagine a period […].

and

Ok, but before we could […].

and

Assume you don’t have any background knowledge like this […].

It’s easy to find examples of you doing this, ie making adjustments to your original point so that my refutations or counterexamples don’t apply anymore. You were successful in doing this with the example of the beads because you tweaked it sufficiently.

Do you think that approach is conducive to you changing your mind if you’re wrong, or to “seeing this discussion to its conclusion”, as you wrote?

#479 · dennis (verified commenter) · in response to comment #478
Reply

It’s easy to find examples of you doing this, ie making adjustments to your original point so that my refutations or counterexamples don’t apply anymore.

Even if this is true, I don’t think that is necessarily bad; it can be part of the process of following an argument through. It would be bad if I kept derailing the argument, pulling it in completely different directions, etc.

Anyway, I had a look back at the examples you quoted and I am not concerned. For most of the cases, I was simply highlighting a specific case that is consistent with the original scenario. At other times I was responding to your introduction of a new scenario (Newton being wrong), and I was adjusting it to fit closer to the original scenario (GR being wrong).

I had been pursuing the role of induction/corroboration in your epistemology and I think in regards to that I have been on track.

#481 · Kieren (people may not be who they say they are) · · Referenced in comment #482
Reply

I wrote:

[T]heories that have survived lots of criticism contain mistakes and truth.

(Which you misquoted, btw, by not italicizing the ‘and’. Those italics are important. Continuing with my quote:)

Your [Kieren’s] question was: “Why would you continue to use GPS if not because of its past success?” That’s one of the reasons why – that I know that even if it contains mistakes, it also contains truth.

Then you said:

In this scenario, the only true parts of GPS that you are aware of are where it has been successful in the past.

One can mistakenly think that it worked in some situations and also mistakenly think that it didn’t work in others. We’re fallible in our interpretation of test results, too. But in any case, I wouldn’t restrict my truth claims about the theory to only those applications of it that I have observed (and correctly think worked). A major ‘reason to believe’ – and I’m phrasing this in justificationist terms on purpose – that a theory is true, or closer to the truth, is that it solves previously unsolved problems. People can and do make such truth claims without ever testing a theory – so there can be no corroboration or (psychological) induction at play.

Regarding your adjusted GPS example about precision timing of industrial-control systems, you wrote:

I think I could convince them to operate the system based on its past success (massive sample size).

As I believe I’ve said before, there was a massive sample size of tests of Newton’s theories over the centuries, they were all successful, and yet Newton was wrong. Do I doubt that one could convince people based on past success? As I’ve said: no. But that’s a psychological question. Sometimes, just a few decisive negative test results undo thousands of corroborations.

[W]hen you speak of “intelligence” you are actually referring to the definition of it in terms of creativity right? Therefore this is an invalid reason because you are referencing the very thing that is under question (creativity -> consciousness).

To be clear, the Deutschian claim as I understand it is that some entity is conscious if and only if it is creative. (Though I have wondered whether it’s really: some entity is conscious if and only if it is critical. But I digress.)

Since it is an ‘if and only if’, we can deduce a lack of creativity from a lack of consciousness, and vice versa – can we not?
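To make that step explicit, the contraposition can even be checked mechanically. Here’s a tiny sketch in Lean (my formalization, not anything from Deutsch – ‘Conscious’ and ‘Creative’ are placeholder predicates):

```lean
-- Assumed Deutschian claim: an entity is conscious iff it is creative.
variable (Entity : Type) (Conscious Creative : Entity → Prop)

-- A lack of creativity entails a lack of consciousness…
example (h : ∀ e, Conscious e ↔ Creative e) (e : Entity)
    (hnc : ¬ Creative e) : ¬ Conscious e :=
  fun hc => hnc ((h e).mp hc)

-- …and, vice versa, a lack of consciousness entails a lack of creativity.
example (h : ∀ e, Conscious e ↔ Creative e) (e : Entity)
    (hns : ¬ Conscious e) : ¬ Creative e :=
  fun hcr => hns ((h e).mpr hcr)
```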

In #481, you wrote:

I had been pursuing the role of induction/corroboration in your epistemology and I think in regards to that I have been on track.

Are you interested in being right or in finding flaws in your thinking?

#482 · dennis (verified commenter) · in response to comment #478 · Referenced in comment #492
Reply

(Which you misquoted, btw, by not italicizing the ‘and’. Those italics are important. Continuing with my quote:)

Sorry, I’m losing formatting when I copy the text. I’ll try to do better.

that a theory is true, or closer to the truth, is that it solves previously unsolved problems. People can and do make such truth claims without ever testing a theory – so there can be no corroboration or (psychological) induction at play.

How can you determine that a theory solves the problem if you do not test it?

As I believe I’ve said before, there was a massive sample size of tests of Newton’s theories over the centuries, they were all successful, and yet Newton was wrong. Do I doubt that one could convince people based on past success? As I’ve said: no. But that’s a psychological question. Sometimes, just a few decisive negative test results undo thousands of corroborations.

What does calling it psychological change? Does it mean that the reasoning is bad and that its conclusions should not be relied on? Which would mean you think it is just as rational to expect a blue bead despite the past 999 jars containing red beads?

Since it is an ‘if and only if’, we can deduce a lack of creativity from a lack of consciousness, and vice versa – can we not?

Sure, but that is not where the trouble lies. I’ll try and lay out your argument as I see it.

1) Animals are uncreative.
2) Creativity is required for consciousness.
3) Therefore, animals are not conscious.

The problem is that premise 2 is what is under question. You linked me to your ‘Buggy Dogs’ post as evidence of premise 2, but I think it is only evidence for premise 1.

Are you interested in being right or in finding flaws in your thinking?

I am aware of the flaws in my epistemology which involves induction, but I am also aware of problems in Popperian epistemology (corroboration). If Popperian epistemology can give me a good answer to these problems then that would be a win for me, because I would then have a much more elegant epistemology to employ.

#488 · Kieren (people may not be who they say they are) · · Referenced in comment #492
Reply

How can you determine that a theory solves the problem if you do not test it?

You can know that from theory.

What does calling it psychological change?

I’m distinguishing between the epistemological and the psychological, as Popper did. That distinction matters because the two fields are often after different things. For example, I’ve quoted Popper here as saying:

Such remarks probably won’t satisfy those who are after a psychological theory of creative thinking […]. Because what they’re after is a theory of successful research and thinking.
I believe that the demand for a theory of successful thinking cannot be satisfied. And it is not the same as a theory of creative thinking. […]

Deutsch picked up the same difference in BoI – in ch. 9 he speaks of “matters not of philosophy but of psychology – more ‘spin’ than substance”. And in #252, I mentioned the difference between the logical and the psychological problems of induction.

Back to your comment:

Does it mean that the reasoning is bad and that its conclusions should not be relied on?

Not just because it’s psychological, but yes, inductive reasoning is bad.

Which would mean you think it is just as rational to expect a blue bead despite the past 999 jars containing red beads?

Depending on the underlying explanation, yes.

Say you have a bead-drawing algorithm (the kind of thing you might see in a virtual casino). Given that the algorithm works as follows…

(defn draw-bead []
  "red")

…the ‘inductive’ approach would happen to be spot on.

But given that it works as follows…

(defn draw-bead' []
  (if (zero? (rand-int 2))
    "red"
    "blue")) 

…the same approach would fail pretty soon – although you might find yourself very unlucky (or lucky, depending on how you look at it) and have it repeat the same color many times.

And given that it works as follows…

(defn draw-bead'' [beads-drawn]
  (if (< beads-drawn 1000)
    "red"
    "blue"))

…the ‘inductive’ approach would be spot on for the first 999 draws, and then it would suddenly fail when you’re more confident than ever before that it’s correct (like people were with Newtonian physics).

The Popperian approach says that making predictions is only part of reasoning, and it’s not the main part. Reasoning is mainly about explaining reality, which involves resolving contradictions between ideas. In the above examples, reality is the underlying algorithm – the source code. If it’s hidden from you, like reality, all you have is your knowledge of which beads you’ve drawn in the past, and even that you only have fallibly. But you don’t limit yourself to predicting which beads will be drawn in the future. You look for cases where your predictions do not come true so you can improve your idea of what the algorithm looks like, i.e., resolve contradictions between what you think the source code is and its return values on the one hand, and the real source code and its so-far observed return values on the other. While we typically make predictions that are in line with past observations, doing so shouldn’t be mistaken for induction.

Re the last example, draw-bead'', you might ask: ‘If we’ve only drawn 500 beads, what earthly reason would we have to suspect that the code flips after 999?’ As in: we would continue to think that draw-bead is the correct solution. We wouldn’t conjecture that the algorithm contains that conditional (if (< beads-drawn 1000) ...) – after all, our predictions have always come true so far, so we’ve had no reason to adjust our model of the algorithm to include that conditional. In other words: we wouldn’t be justified in introducing the conditional; we should only change our code when a prediction fails. And I would agree. But if somebody made that change, even without justification, they’d happen to be right! Not only would they be vindicated after 500 more draws, but they’d have discovered the true source code without any justification. So how can justification possibly matter?
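For concreteness, here’s a small simulation (my own sketch, not part of the original exchange) of a naive ‘inductive’ predictor – guess the color drawn most often so far – run against draw-bead''. It is right 999 times in a row and then fails exactly when its track record looks best:

```clojure
;; The hidden 'reality': red for the first 999 draws, then blue forever.
(defn draw-bead'' [beads-drawn]
  (if (< beads-drawn 1000) "red" "blue"))

;; A naive 'inductive' predictor: guess the color seen most often so far.
(defn inductive-guess [history]
  (if (empty? history)
    "red" ; arbitrary first guess
    (key (apply max-key val (frequencies history)))))

;; Run n draws, counting how many predictions came true.
(defn run [n]
  (loop [i 0, history [], correct 0]
    (if (= i n)
      correct
      (let [guess  (inductive-guess history)
            actual (draw-bead'' i)]
        (recur (inc i)
               (conj history actual)
               (if (= guess actual) (inc correct) correct))))))
```

(run 999) returns 999 – a perfect record – while (run 1000) still returns 999: the 1,000th prediction fails, and no amount of past success warned the predictor.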

The problem is that premise 2 is what is under question. You linked me to your ‘Buggy Dogs’ post as evidence of premise 2, […].

No, I think you may have misread me. In #476, you asked, “[w]hat sort of evidence tells us that animals are not conscious?” I responded with a link to my ‘Buggy Dogs’ post. To be sure, there was an aside of mine in between on how consciousness isn’t the same as creativity but follows from it, but when I wrote “For specific evidence […]”, I was referring specifically to your question.

I consider premise 2 – “Creativity is required for consciousness.” – to be uncontroversial for the moment. But if you have arguments why that premise cannot be true, ie refutations, I want to know.

I am aware of the flaws in my epistemology which involves induction, but I am also aware of problems in Popperian epistemology (corroboration).

As I’ve said in #109 re corroboration:

Salmon is right to point out that there are problems with Popper’s concept of corroboration. Others have written about that. But I think you can retain much of Popper’s epistemology just fine without accepting that concept. It’s not that important.

That leaves the problem of induction. You also wrote:

If Popperian epistemology can give me a good answer to these problems then that would be a win for me, because I would then have a much more elegant epistemology to employ.

Popper has addressed induction thoroughly. Have you read chapter 1 of his book Objective Knowledge, titled ‘Conjectural Knowledge: My Solution of the Problem of Induction’?

#490 · dennis (verified commenter) · in response to comment #488 · Referenced in comment #492
Reply

You can know that from theory.

I could conjecture all sorts of fantastical/crazy theories to solve any problem I want. These theories would meet the criteria you provided – solving previously unsolved problems – but we wouldn’t expect them to contain truth because of this, right?

Not just because it’s psychological, but yes, inductive reasoning is bad.

Maybe we can find common ground here. If what you’re calling psychological here is our sort of instinctive response to finding evidence in favor of a theory (failed falsification attempts), which drives our expectations/beliefs of the future success of the theory, then I’m happy to concede that the induction I’m referring to is this same thing.

…the ‘inductive’ approach would be spot on for the first 999 draws, and then it would suddenly fail when you’re more confident than ever before that it’s correct (like people were with Newtonian physics).

I should probably explain a little more about where induction fits into my epistemology. I follow the writings of Charles Sanders Peirce, where scientific reasoning involves a cycle of abduction, deduction, and induction. The purpose of each is roughly as follows.

Abduction: Hypothesis, conjecture.

Deduction: Deducing results/facts/consequences that follow from the theory.

Induction: Inferring the fallible truth of the theory (as far as it gives us confidence in its predictions) based on the proportion of deduced consequences of the theory that were found to be true. This includes looking back and examining past consequences of the theory, too. The more random the sampling, the better the induction.

Error is expected to occur. The whole process is fallible, but self-correcting. Induction is not intended to always provide good results, but by repeating the process of abduction, deduction, and induction, the theory can be corrected and in the long run we move closer to the truth.

So if after 999 draws we make a bad prediction, it would not be damning for the whole process, which would eventually correct itself. The source code of the universe might be full of these little random one-off changes and tricks. After being tricked/misled numerous times, someone might hypothesize that the universe is such that it will mislead and trick us, and an induction would give them confidence in this.

Not only would they be vindicated after 500 more draws, but they’d have discovered the true source code without any justification. So how can justification possibly matter?

The justification matters because it gives us confidence/belief in the theory; it gives us a tool to convince/reason others into believing the theory too.

As per your last source code example, we have the new conjecture (new because we are replacing our last conjecture) that all of the beads will be blue after the 999th red bead. Consider your confidence in this theory’s predictions at two different points in time.
1) 1 blue bead has been observed after 999 red beads.
2) 10,000 blue beads have been observed after the 999 red beads.

Imagine trying to convince someone that the next bead will be blue given the evidence at hand in scenario 1 versus scenario 2. In practice, the effects of justification cannot be ignored.

I consider premise 2 – “Creativity is required for consciousness.” – to be uncontroversial for the moment. But if you have arguments why that premise cannot be true, ie refutations, I want to know.

As I understand it, you think it is the only remaining theory of consciousness that has survived criticism. I disagree. I can think up other theories which I see as less controversial, e.g. certain information processing in the brain causes consciousness. I prefer such an alternative because it doesn’t conflict with my background expectations that animals have consciousness.

You might suggest that the non-creative (algorithmic) aspects of our brain’s processing are without consciousness, but you will need to provide a good argument in favor of that claim before I can accept it. The reasoning you provide in your Animal-Sentience FAQ appears circular to me. Perhaps you can lay out your favorite argument here in a more syllogistic form?

Popper has addressed induction thoroughly. Have you read chapter 1 of his book Objective Knowledge, titled ‘Conjectural Knowledge: My Solution of the Problem of Induction’?

I have read this. He correctly points out the shortcomings of various forms of induction. He also attempts to solve the pragmatic problem of preference with his conception of Corroboration. Did you want to discuss anything in particular?

#491 · Kieren (people may not be who they say they are) ·
Reply

I could conjecture all sorts of fantastical/crazy theories to solve any problem I want. These theories would meet the criteria you provided – solving previously unsolved problems – but we wouldn’t expect them to contain truth because of this, right?

Right, because they don’t meet other criteria (such as not being “fantastical/crazy”). We have all kinds of criteria good theories must meet. DD wrote about this in BoI.

Re induction, I have pointed out that people use ‘induction’ psychologically. I do not disagree that past successes can be used to convince people to adopt a theory. That doesn’t refer to induction as a process that can create knowledge.

If you’re going to hold on to induction – Peirce’s or someone else’s – you better come up with a refutation of Hume’s and Popper’s work on it. I’m not interested in refuting induction for you, nor in making it work.

Regarding “[t]he source code of the universe”, when I wrote “[i]n the above examples, reality is the underlying algorithm – the source code”, I was debating whether I should clarify that I do NOT mean that reality is made up of source code. Looks like I was wrong not to. So, to be clear: I was merely using source code as a stand-in for reality.

The justification matters because it gives us confidence/belief in the theory; it gives us a tool to convince/reason others into believing the theory too.

Not in the scenario I’ve described, where you’d have no ‘reason to believe’ in your theory whatsoever, nor would anyone else, yet you’d be 100% correct. In addition, I quote BoI ch. 10 once more:

So the thing [justificationists] call ‘knowledge’, namely justified belief, is a chimera. It is unattainable to humans except in the form of self-deception; it is unnecessary for any good purpose; and it is undesired by the wisest among mortals.

You wrote:

You might suggest that the non-creative (algorithmic) aspects of our brain’s processing are without consciousness, but you will need to provide a good argument in favor of that claim before I can accept it.

I think your request for “a good argument in favor” is indicative of a larger problem in this discussion. You seek supportive arguments, whereas I seek refutations, and I also don’t consider a ‘supportive argument’ a success or as causing any sort of increase in a theory’s epistemic status. Your methodology is justificationist in nature, mine is Popperian/‘refutationist’. The reason you should accept the claim is that you cannot find a refutation of it (if indeed you cannot find one), not that I haven’t given enough arguments in favor of it.

This difference in our respective approaches may lead to an impasse in this discussion. That doesn’t mean we can’t learn from each other, but I follow Elliot in thinking that if you’re going to have a fruitful discussion, you better make decisive, yes/no arguments. I’d love for you to offer me a brutal refutation of the claim that animals are not sentient. Conversely, I’m not interested in providing “a good argument in favor” of my claims re animal sentience – not only do I doubt that any such argument will ever convince you because there could always be more justifications, but I also don’t ask for such an argument in favor of the claim that animals are sentient after all.

Your first attempt at refutation was this:

Routinely I find evolved aspects of my biological self are also present in other animals.
Consciousness is an evolved aspect of myself.
Therefore, consciousness has a fair chance of being present in other animals.

Notably, this isn’t a deductive syllogism of the kind you requested from me. It’s inductive. But in any case, that is how we then got to the example with the beads, and this first attempt doesn’t work, IMO, for the reasons I’ve explained re induction. But you can convince me that I’m wrong by refuting Hume’s and Popper’s work on induction – not by giving arguments in support of your view, but by refuting theirs.

I believe your only other attempt at refutation has been the claim that my argument is circular. I don’t see it. But here’s the syllogism you requested:

  1. Creativity is necessary and sufficient for consciousness/sentience to arise.
  2. Animals are not creative.
  3. Therefore, animals are not sentient.

You can arrive at this syllogism by taking yours from #488 and reversing 1) and 2). (The major premise should come before the minor premise.)

As I hinted in #482, the syllogism may instead be:

  1. An ability to be critical is necessary and sufficient for consciousness/sentience to arise.
  2. Animals do not have this ability.
  3. Therefore, animals are not sentient.

(Given the links between creativity and criticism, we may eventually find these two syllogisms to be the same, but the difference in focus may be important in understanding animals and consciousness.)

Please explain how these syllogisms are circular. My current guess is that you’re looking for a justification for 1), you think 3) would constitute such a justification, and so you misinterpret the syllogism as being circular.

You also wrote:

I have read [chapter 1 of Popper’s Objective Knowledge]. He correctly points out the shortcomings of various forms of induction. He also attempts to solve the pragmatic problem of preference with his conception of Corroboration. Did you want to discuss anything in particular?

No. You had requested “a good answer to these problems” so you may “have a much more elegant epistemology to employ”. The Popper reference was an attempt to help with that.

#492 · dennis (verified commenter) ·
Reply

I started responding to your comments regarding induction/epistemology, but I’ve decided to leave them out of this post to avoid confusion. See below.

This difference in our respective approaches may lead to an impasse in this discussion. That doesn’t mean we can’t learn from each other, but I follow Elliot in thinking that if you’re going to have a fruitful discussion, you better make decisive, yes/no arguments. I’d love for you to offer me a brutal refutation of the claim that animals are not sentient.

Fair enough. Whilst I am interested in discussing our epistemological differences, I am more interested in providing you with a refutation of your claims about animal consciousness (I assume you are more interested in this too). I think this can be done without us clashing on epistemology, and going forward I will focus on doing just that. If we find that this is clearly not possible without first resolving our epistemological differences then that can be our conclusion. Maybe we would then agree to pick up where we left off and focus on epistemology alone.


You have provided the following syllogism.

1) Creativity is necessary and sufficient for consciousness/sentience to arise.
2) Animals are not creative.
3) Therefore, animals are not sentient.

This argument is fine, however it is not the argument I was looking for. I was actually looking for an argument where premise 1 of above would be the conclusion. The claim “Creativity is necessary and sufficient for consciousness/sentience to arise” is what I take issue with (I don’t see why it couldn’t be otherwise). If you don’t want to try and provide an argument in favor of this claim because it would be justification seeking, then maybe you can instead refute the counterclaim: creativity is not necessary for consciousness. If you can provide me with your preferred argument in syllogistic form, either in favor of your claim or refuting the counterclaim, then that would give me a chance to demonstrate the circularity I see in your reasoning. If I cannot demonstrate circularity then you would have convinced me that my refutation is no good.

#531 · Kieren (people may not be who they say they are) ·

Maybe I should clarify what I mean by an argument in syllogistic form. It doesn’t have to be strictly 2 premises and a conclusion. It can be a chain of premises and conclusions. The main thing I’m looking for is for the logic of the argument to be laid out clearly.

#532 · Kieren (people may not be who they say they are) ·

I think this can be done without us clashing on epistemology, and going forward I will focus on doing just that. If we find that this is clearly not possible without first resolving our epistemological differences then that can be our conclusion. Maybe we would then agree to pick up where we left off and focus on epistemology alone.

OK.

You have provided the following syllogism.

1) Creativity is necessary and sufficient for consciousness/sentience to arise.
2) Animals are not creative.
3) Therefore, animals are not sentient.

You’ve misquoted me again; as a result, the formatting is off. You can see an explanation here (that site is under development and the link may break). You can use that site to check quotes before submission (expect bugs). Or you can paste your quote into the browser’s word search and, if you only get one match (the one in the textarea), it must be a misquote (that won’t work in this instance because of the enumeration but it’s a decent quick-glance approach in general).

This argument is fine, however it is not the argument I was looking for. I was actually looking for an argument where premise 1 of above would be the conclusion.

I suspect that an explanatory theory of consciousness will provide such an argument. I’m afraid I do not have one yet, but you seem to imply that my claim’s epistemic status will increase if it’s a conclusion rather than a standalone conjecture.

That cannot be true because we’d always need infinitely many new theories to accept just one new one. Imagine if Einstein had proposed GR and then people had said ‘but what does it follow from?’ We still don’t know. Coming up with the next theory (from which GR follows, if only as an approximation) is another creative act. And if we do find that next theory, people can then always say ‘well but what does that theory follow from?’.

This approach exhibits the infinite regress of justificationism, so I’m skeptical as to whether you can “provid[e] [me] with a refutation of [my] claims about animal consciousness […] without us clashing on epistemology […]”.

All that being said, I am still interested in your plan of demonstrating circularity, and this path…

If you don’t want to try [or can’t, for the moment] provide an argument [ie syllogism] in favor of this claim […], then maybe you can instead refute the counterclaim: creativity is not necessary for consciousness.

…is still open. (You can see here that my quote is accurate.) I think your request can be rephrased in terms of breaking symmetry between the claims ‘creativity is necessary for consciousness’ and ‘creativity is not necessary for consciousness’. I can then meet your request for my “preferred argument in syllogistic form” by breaking symmetry as follows:

  1. There are only two options: either creativity is necessary for consciousness (a) or it is not (b).
  2. I rule out (b) because I have seen no evidence of creativity in animals, which I should be seeing if they were creative, and I have seen lots of evidence of animals making mistakes which, were they conscious, they would correct (eg this cat in #107), as well as evidence of their algorithmic, ie non-creative nature (see #124, among others).
  3. That leaves only (a).

Thus there should be a way for you “to demonstrate the circularity [you] see in [my] reasoning.”

#533 · dennis (verified commenter) · in response to comment #531

Happy new year!

  1. There are only two options: either creativity is necessary for consciousness (a) or it is not (b).
  2. I rule out (b) because I have seen no evidence of creativity in animals, which I should be seeing if they were creative, and I have seen lots of evidence of animals making mistakes which, were they conscious, they would correct (eg this cat in #107), as well as evidence of their algorithmic, ie non-creative nature (see #124, among others).
  3. That leaves only (a).

Thanks for laying this argument out for me. It looks like premise 2 is doing most of the work here. I would like to expand on one of the claims you made in premise 2: “I have seen lots of evidence of animals making mistakes which, were they conscious, they would correct”. I have expanded this claim out as follows.

  1. If animals are conscious, then they would correct obvious mistakes in their behavior.
  2. Animals have been observed as failing to correct obvious mistakes in their behavior (e.g. the cat failing to drink water from a tap).
  3. Therefore, animals are not conscious.

My next question would be how you establish premise 1 without assuming that creativity is necessary for consciousness?

#535 · Kieren (people may not be who they say they are) ·

Happy new year!

Same to you.

It looks like premise 2 is doing most of the work here.

Yes.

  1. If animals are conscious, then they would correct obvious mistakes in their behavior.
  2. Animals have been observed as failing to correct obvious mistakes in their behavior (e.g. the cat failing to drink water from a tap).
  3. Therefore, animals are not conscious.

[How do] you establish premise 1 without assuming that creativity is necessary for consciousness?

I see the problem. If premise 1 itself depends on creativity being necessary for consciousness, then that means I (unwittingly) snuck that assumption into my original premise 2, when it was the conclusion I wanted to arrive at. Circular reasoning.

Thanks for pointing this out. Time for me to go back to the drawing board.

#537 · dennis (verified commenter) · in response to comment #535

I see the problem. If premise 1 itself depends on creativity being necessary for consciousness, then that means I (unwittingly) snuck that assumption into my original premise 2, when it was the conclusion I wanted to arrive at. Circular reasoning.

Spot on. That is how I see it.

Thanks for pointing this out. Time for me to go back to the drawing board.

Back to the drawing board for this particular argument, or for your view on animal intelligence in general?

#539 · Kieren (people may not be who they say they are) ·

This particular argument first, then potentially my view on animal intelligence in general.

#540 · dennis (verified commenter) · in response to comment #539

I dismiss my previous syllogism and instead refer back to the DD quote I gave in the main article from BoI ch. 7:

My guess is that every AI is a person: a general-purpose explainer. It is conceivable that there are other levels of universality between AI and ‘universal explainer/constructor’, and perhaps separate levels for those associated attributes like consciousness. But those attributes all seem to have arrived in one jump to universality in humans, and, although we have little explanation of any of them, I know of no plausible argument that they are at different levels or can be achieved independently of each other. So I tentatively assume that they cannot.

To put this in syllogistic form:

  1. There are only two options: either creativity is necessary for consciousness (a) or it is not (b).
  2. There is “no plausible argument” that consciousness can be achieved independently from creativity, and it seems that they both “arrived in one jump to universality in humans” (link added).
  3. That leaves only (a).

Building on this syllogism, we can address animals separately (I think one of the weaknesses of my circular syllogism, and potentially the reason for its circularity, was that it did too much at once):

  1. There are only two options: either some computer is conscious (a) or it is not (b).
  2. Evidence of some behavior or idea that must have been created by the computer itself (as opposed to, say, merely having been inherited via genes or copied via rote imitation/memes) would be evidence of creativity and, therefore, consciousness.
  3. I know of no such evidence for (non-human) animal computers. That leaves only (b).
#542 · dennis (verified commenter) · in response to comment #540

  1. There are only two options: either creativity is necessary for consciousness (a) or it is not (b).
  2. There is “no plausible argument” that consciousness can be achieved independently from creativity, and it seems that they both “arrived in one jump to universality in humans” (link added).
  3. That leaves only (a).

My focus is on premise 2. To disprove it I would have to provide an alternative account of consciousness that you haven’t yet refuted or shown to be implausible. I think one such account is that the execution of certain inborn algorithms by certain means (e.g. by an animal brain) gives rise to conscious experience.

In your FAQ you attempt to refute such a view.

“In both cases, the knowledge was ‘inherited’ from an outside source. Both the computers and animals are not the creators of the knowledge they contain. But they’d need to be the creators to be conscious (see The Beginning of Infinity ch. 7). All they’re concerned with is, again, the mindless execution of algorithms they already contain.”

However, in your refutation you are seemingly referring back to premise 2 itself (the argument from DD). If this is the case, then there is still circularity.

#547 · Kieren (people may not be who they say they are) · in response to comment #542

[I]n your refutation you are seemingly referring back to premise 2 itself […]. If this is the case then there is still circularity.

Maybe I’m missing something, but I think it’s merely a repetition. In other words, if I propose a claim a, and you propose a conflicting claim b, and I then say ‘no, I still think a’, that isn’t circular. Granted, it may be repetitive, but I think it would only be circular if I said, directly or indirectly, ‘a because a’.

In any case, I would use a different refutation. The claim that “the execution of certain inborn algorithms by certain means (e.g. by an animal brain) gives rise to conscious experience” seems to imply that there is something special about wetware such as animal brains. As DD and others have pointed out before me, that cannot be true since it’s in violation of computational universality: there’s nothing a computer made of metal and silicon couldn’t do that one made of wetware could (and vice versa). Our computers are universal simulators (within memory and processing-power constraints).

This refutation refers to neither previously stated syllogism, and instead to a different concept altogether (computational universality), so I don’t see any circularity here.
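To make the universality point concrete, here is a toy sketch (my own illustration, not anyone’s model of the brain): a minimal Turing-machine interpreter. The ‘hardware’ below is just a Python dict and a list, yet within memory limits it can run any program a wetware computer could run.

```python
# A toy Turing-machine interpreter. The 'hardware' here is just a Python
# dict and a list, yet within memory limits it can run any program that a
# computer made of wetware could run. The substrate is incidental; what
# matters is the program.

def run_turing_machine(program, tape, state="start", pos=0, max_steps=1000):
    """Run `program`, a dict mapping (state, symbol) to
    (new_symbol, move, new_state), until it reaches 'halt'."""
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[pos] if pos < len(tape) else "_"
        new_symbol, move, state = program[(state, symbol)]
        if pos >= len(tape):
            tape.append("_")  # grow the tape on demand
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1  # toy: negative positions unhandled
    return "".join(tape)

# A program that flips every bit on the tape, then halts at the blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

flipped = run_turing_machine(flip, "0110_")  # "1001_"
```

The same `flip` program could be run by neurons, dominos, or silicon; the explanation of what it does would be identical in each case.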

#550 · dennis (verified commenter) · in response to comment #547

In any case, I would use a different refutation. The claim that “the execution of certain inborn algorithms by certain means (e.g. by an animal brain) gives rise to conscious experience” seems to imply that there is something special about wetware such as animal brains. As DD and others have pointed out before me, that cannot be true since it’s in violation of computational universality: there’s nothing a computer made of metal and silicon couldn’t do that one made of wetware could (and vice versa). Our computers are universal simulators (within memory and processing-power constraints).

I might have confused things by using the word “inborn”. The cause of consciousness in my alternate theory is not dependent on any particular hardware. The important part is the execution of certain algorithms (possibly including but not limited to creative thought). I don’t specify which exact algorithms, but I conjecture that they are natural to animal brains. E.g. I think it plausible that the brain’s modeling of an external world is what gives rise to our concious inner world. This modeling could be algorithmic and without creativity.

#578 · Kieren (people may not be who they say they are) ·

[…] I think it plausible that the brain’s modeling of an external world is what gives rise to our concious [sic] inner world.

That’s a common claim; let’s look into it. Roombas also model the external world, as do many NPCs in video games. Are they conscious?

#580 · dennis (verified commenter) · in response to comment #578

I know that I am conscious, and through reasoning based on similarity I know that other humans are very likely conscious. Based on the evidence we have it seems that consciousness is produced by various complex processes in the brain. I find it plausible that the brains modelling of the external world could be an important part of of this. If a Roomba were to process and model the world in a similar way to how humans brains model the world then I would think there is a small chance that it has some kind of consciousness. The closer the Roomba’s processing is to that of a human brain, the more strongly I would believe that it is conscious.

#581 · Kieren (people may not be who they say they are) · in response to comment #580 · Referenced in comments #584, #585

Your answer is littered with inductivism and the strength of your beliefs. I wasn’t asking how likely your theories are, how strongly you believe in them, or anything else about your psychology. I was asking whether, in objective reality, roombas and video-game NPCs are conscious. They either are or they aren’t.

If you looked at the source code of a video-game NPC, would your explanation of how the code works refer to consciousness?

#582 · dennis (verified commenter) · in response to comment #581

If (1) the NPC is performing a similar kind of modelling as the human brain, and (2) it is this kind of modelling which produces consciousness, then the NPC would be conscious.

I don’t know if this is actually the case. I’m only claiming that it is plausible that some kind of algorithmic processing (such as modelling the external world) produces consciousness.

#583 · Kieren (people may not be who they say they are) · in response to comment #582

I think the answer to my question is ‘no, the explanation of the source code for NPCs and Roombas does not refer to consciousness’. Note also that people have been able to program such NPCs and Roombas without first having to know how consciousness works. It’s possible programmers accidentally made them conscious, but that would lead to unintended behavior in the NPCs. Programmers would seek to understand and probably get rid of this behavior as they demand absolute obedience. Also, usually, explanations come before major discoveries.

If (1) the NPC is performing a similar kind of modelling as the human brain, and (2) it is this kind of modelling which produces consciousness, then the NPC would be conscious.

Doesn’t that just amount to saying: ‘There’s some algorithm in the brain that makes it conscious, and if an NPC runs the same algorithm, it’s also conscious’?

I find that easy to agree with, but you haven’t explained why that algorithm should involve modeling the external world. In #581, you wrote you “find it plausible that the brains [sic] modelling of the external world could be an important part of of [sic] this.” But why?

Better yet, see if you can explain why whatever algorithm produces consciousness must have to do with modeling the external world, ie cannot be anything else. Without using ‘induction’. That would be convincing.

#584 · dennis (verified commenter) · in response to comment #583

It’s possible programmers accidentally made them conscious, but that would lead to unintended behavior in the NPCs.

Sure, unless conscious experience is just along for the ride, a byproduct of the information processing, or perhaps the impact of the consciousness is so minimal that it goes unnoticed.

I find that easy to agree with, but you haven’t explained why that algorithm should involve modeling the external world. In #581, you wrote you “find it plausible that the brains [sic] modelling of the external world could be an important part of of [sic] this.” But why?

Modelling of the external world is just one example of the kind of algorithmic processing that happens within brains that I find plausible as the cause for consciousness. It could be solely or only partially responsible for conjuring up a consciousness. The reason I think modelling of the external world is important for consciousness is because the things most vividly present in my awareness are the sorts of things that I imagine my brain is keeping a mental model of (objects, thoughts, and feelings). Another type of algorithmic processing that I think is a plausible cause of consciousness is the process of integrating a number of other brain processes together. This seems plausible since it is supported by studied neural correlations.

To be clear: my theory isn’t saying that a particular algorithmic brain process is the cause of consciousness, just that some number of these algorithmic processes (maybe one) are causing consciousness.

#585 · Kieren (people may not be who they say they are) ·

[C]onscious experience may just be along for the ride, a byproduct of the information processing […].

That’s basically been Deutsch’s and my claim all along – where you and I seem to disagree is whether all information processing results in consciousness or just some (and, in the latter case, which kinds). You had previously argued that all kinds might – now you’re saying maybe only one does. Which is it?

[P]erhaps the impact of the consciousness is so minimal that it goes unnoticed.

Surely not. Your consciousness has causal power, does it not? It’s at least causing you to write comments on this blog.

The reason I think modelling of the external world is important for consciousness is because the things most vividly present in my awareness are the sorts of things that I imagine my brain is keeping a mental model of (objects, thoughts, and feelings).

You just switched from “modelling of the external world” to the much more general “mental model”. Thoughts and feelings aren’t part of a model of the world around you. Also, consider whether a human brain in a vat would still be conscious. It couldn’t do any modeling of the external world, but I think it would still be conscious. Don’t you?

Another type of algorithmic processing that I think is a plausible cause of consciousness is the process of integrating a number of other brain processes together. This seems plausible since it is supported by studied neural correlations.

I forget who said this and the exact wording, but at most such correlations could corroborate the view that psychophysical parallelism is indeed very parallel. More generally – and we’re getting back to core epistemological disagreements here – the Popperian view is that corroboration should not increase your credence in a theory. It just means that your tentative assignment of the truth status ‘true’ to the theory remains unchanged.

I think neuroscience is generally a bad approach to the question of how consciousness works because neuroscience operates on the wrong level of emergence. The level is too low. You wouldn’t study computer hardware to understand how a word processor works. We need explanations on the appropriate level of emergence. I doubt colorful pictures of the brain can help us here; I’d disregard the brain and focus on the mind. Consciousness is an epistemological subject, not a neuroscientific one. Neuroscience has also led to such nonsense as this and this. It surely has value when it comes to understanding the brain’s hardware, including medical use cases, but when it comes to the mind I think it’s severely limited.

My theory [is] just that some number of […] algorithmic processes (maybe one) are causing consciousness.

Translation: something in the brain causes consciousness. Clearly. How does that tell us anything new?

#586 · dennis (verified commenter) · in response to comment #585

You had previously argued that all kinds might – now you’re saying maybe only one does. Which is it?

It might be that all kinds of information processing results in conscious experience. I have reasons against the idea, but I can’t rule it out. Can you?

Surely not. Your consciousness has causal power, does it not? It’s at least causing you to write comments on this blog.

Again, it may or may not. I have reasons in favor of the idea, but I can’t rule either option out yet.

You just switched from “modelling of the external world” to the much more general “mental model”. Thoughts and feelings aren’t part of a model of the world around you.

Fair point. The theory should then be that modeling of both the external world and our internal world causes consciousness.

Popperian view is that corroboration should not increase your credence in a theory. It just means that your tentative assignment of the truth status ‘true’ to the theory remains unchanged.

The Popperian view also says that we should prefer theories with higher corroboration.

I think neuroscience is generally a bad approach to the question of how consciousness works because neuroscience operates on the wrong level of emergence. The level is too low. You wouldn’t study computer hardware to understand how a word processor works.

Assume we don’t already understand how computers work, and that our starting point was the software. I think we would study the hardware to understand how the software came to be and to understand where else we might find it.

Translation: something in the brain causes consciousness. Clearly. How does that tell us anything new?

I’m saying a subset of algorithmic processes in the brain (whatever they may be) cause consciousness, as opposed to creative processes in the brain (whatever they may be). I don’t see how the former is ruled out as improbable.

#587 · Kieren (people may not be who they say they are) · in response to comment #586

It might be that all kinds of information processing results [sic] in conscious experience. I have reasons against the idea, but I can’t rule it out. Can you?

Yes. As I said at the beginning, our best explanations of how calculators work don’t refer to consciousness. So whatever information processing they do does not, to our current best understanding, result in consciousness.

This is, again, an application of DD’s criterion of reality. You don’t have a refutation of it, yet you don’t want to apply it, which then leads to situations where you “can’t rule either option out”.

We could be wrong, of course. And one day we may realize that. But until then, we have to take our best existing explanations seriously. I think the underlying issue is that you don’t think something is knowledge unless it is certain.

The theory should then be that modeling of both the external world and our internal world causes consciousness.

Having an “internal world”, including thoughts and particularly feelings, arguably presupposes consciousness. In which case your argument sounds circular.

Popperian view is that corroboration should not increase your credence in a theory. It just means that your tentative assignment of the truth status ‘true’ to the theory remains unchanged.

Why is it so hard for you to quote me properly? The first sentence makes it sound like I forgot a word at the beginning but I didn’t. The proper way to quote me would have been to write ‘[T]he Popperian view…’ and so on.

The Popperian view also says that we should prefer theories with higher corroboration.

I wrote in #109 that “Salmon is right to point out that there are problems with Popper’s concept of corroboration. Others have written about that. […] I think you can retain much of Popper’s epistemology just fine without accepting that concept. It’s not that important.”

Assume we don’t already understand how computers work, and that our starting point was the software.

Wouldn’t it be more analogous, from your POV, to say that we don’t understand how the software works, and that our starting point is the hardware? Cuz that’s what neuroscientists are doing.

In any case, it seems to me that, in popular culture, we understand more about the brain as hardware than about the mind as software. But, contrary to what I think you’re suggesting, I came up with the neo-Darwinian theory of the mind I have mentioned previously. And I did so without studying the brain ~at all, simply by making guesses about the mind and criticizing those guesses. Even though this theory is by no means complete, it has not been refuted and has been very fruitful, and it has enabled me to solve other, related problems I did not anticipate (which is a good sign!).

I’m saying a subset of algorithmic processes in the brain (whatever they may be) cause consciousness, as opposed to creative processes in the brain (whatever they may be). I don’t see how the former is ruled out as improbable.

I see – I should have placed emphasis, in my mind, on the word “algorithmic” in what you had written previously; I had missed it.

I think you’d want to rule either one out as false, not as improbable. I rule out that algorithmic processes (with one exception, see below) could lead to consciousness because the mere, mindless execution of pre-existing knowledge (which is represented by those algorithms) precludes consciousness (or else it wouldn’t be mindless). The destruction of knowledge can just be done mindlessly, too. So the only option that’s left is the creation of knowledge. Which brings us back to creativity.

To be clear, whatever program gives rise to consciousness must itself be executable mindlessly, too (or else it wouldn’t give rise to but depend on consciousness). So there is one exception, and to that extent we’re in agreement. But there’s something different about that program – something our current best explanations of information processing don’t take into account yet.

To tackle this problem, the most promising approach to consciousness that I am aware of is the study of ephemeral properties of computer programs. Can you think of any such properties? I have found that to be surprisingly difficult!


I want to clarify for others reading this discussion what I mean by ‘algorithmic’. Whatever software gives rise to consciousness is still an ‘algorithm’ in the sense that a Turing machine could run it. By ‘algorithmic’ I instead mean something that doesn’t require reflection, introspection, knowledge creation, wonder – that kind of thing. Just something that can be done mindlessly. ‘Robotic’ is another word for it.

#588 · dennis (verified commenter) · in response to comment #587

Yes. As I said at the beginning, our best explanations of how calculators work don’t refer to consciousness. So whatever information processing they do does not, to our current best understanding, result in consciousness.

This is, again, an application of DD’s criterion of reality. You don’t have a refutation of it, yet you don’t want to apply it, which then leads to situations where you “can’t rule either option out”.

I don’t think DD’s criterion is appropriate here. The reason is that even if we did come to know that calculators had accompanying consciousness, our best explanations of how calculators work wouldn’t change.

The theory should then be that modeling of both the external world and our internal world causes consciousness.

Having an “internal world”, including thoughts and particularly feelings, arguably presupposes consciousness. In which case your argument sounds circular.

Sorry, by modelling an inner world, I mean modelling one’s own body and the processes within it (such as what is happening when I touch something hot). I don’t mean to presuppose consciousness.

Why is it so hard for you to quote me properly? The first sentence makes it sound like I forgot a word at the beginning but I didn’t.

Sorry, I put a lot of the last reply together with my phone and must have made a mistake whilst copying.

I wrote in #109 that “Salmon is right to point out that there are problems with Popper’s concept of corroboration. Others have written about that. […] I think you can retain much of Popper’s epistemology just fine without accepting that concept. It’s not that important.”

True. This is a point of disagreement. Since we decided to put the epistemology discussion on hold I won’t push back too much. I will just say that even if neural correlates don’t add credence to the theory they also do not rule out the theory. Therefore I still see it as a plausible alternate theory of consciousness.

Wouldn’t it be more analogous, from your POV, to say that we don’t understand how the software works, and that our starting point is the hardware? Cuz that’s what neuroscientists are doing.

We do understand consciousness (software) insofar as we know how to affect it – for example, that this or that experience causes pleasure, pain, or some other conscious experience. Where we do not understand it is in questions of how and where it comes to exist and what its function is. Here I think neuroscience can give answers. Discovering that certain operations in a brain can cause a particular conscious experience would be like discovering that incrementing a register in a cpu moves the cursor along in the word processor.

I think you’d want to rule either one out as false, not as improbable. I rule out that algorithmic processes (with one exception, see below) could lead to consciousness because the mere, mindless execution of pre-existing knowledge (which is represented by those algorithms) precludes consciousness (or else it wouldn’t be mindless). The destruction of knowledge can just be done mindlessly, too. So the only option that’s left is the creation of knowledge. Which brings us back to creativity.

As I understand it, your argument is as follows.

1) Consciousness results from the execution of knowledge (algorithms).
2) Consciousness doesn’t figure into our best explanations of the execution of pre-existing knowledge.
3) Therefore, as per DD’s criterion of reality, execution of pre-existing knowledge is mindless.
4) Therefore, the only remaining algorithms for causing consciousness are those that involve creating new knowledge (creativity).

I don’t see why premise (2) can’t instead be ‘Consciousness doesn’t figure into our best explanations of creative algorithms.’ This would result in the opposite conclusions.

#589 · Kieren (people may not be who they say they are) · · Referenced in comments #592, #594

[E]ven if we did come to know that calculators had accompanying consciousness, our best explanations of how calculators work wouldn’t change.

How could they not? Discovering that calculators are conscious would be remarkable. The fact that our explanations fail to predict the consciousness of calculators would be a problem we’d want to solve. We’d want to know how it is that calculators are conscious and update our explanations of them accordingly.

Sorry, by modelling an inner world, I mean modelling one’s own body and the processes within it (such as what is happening when I touch something hot).

Why should that require or give rise to consciousness? Aren’t you just describing homeostasis? The simplest of organisms have homeostasis – organisms which you presumably do not think are conscious.

[E]ven if neural correlates don’t add credence to the theory they also do not rule out the theory. Therefore I still see it as a plausible alternate theory of consciousness.

It is, of course, true that certain neural states give rise to consciousness, but the reason neuronal correlates, or any other explanations relying on the brain, are ruled out as fundamental is computational universality: computers not made of neurons can also be conscious if programmed correctly. Therefore, such explanations can at best be parochially true. Neurons do somehow give rise to consciousness, but the fact that they’re neurons is incidental. It’s the program that matters.

Discovering that certain operations in a brain can cause a particular conscious experience would be like discovering that incrementing a register in a CPU moves the cursor along in the word processor.

Consider this alternative explanation for why the cursor moves along: because the user pressed the right-arrow key, and the program is configured to move the cursor to the right anytime that happens. While your explanation on the low, CPU level is the kind of explanation that may well be technically correct, I think mine is not just correct but also operates on a more appropriate level of emergence. This becomes important once we entertain other kinds of computers that don’t have a von Neumann architecture (which, it seems to me, the brain does not have!). We also lose an understanding of causality when we go too low: it’s not really the register in the CPU that moves the cursor along, it’s the program. Recall DD’s analysis in BoI ch. 5 of Hofstadter’s program that instructs certain dominos to fall or not to fall.
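To make the two levels concrete, here is a minimal Python sketch (the function name and state layout are hypothetical, purely for illustration): the causally relevant explanation lives in the program’s event-handling rule, while the low-level ‘register increment’ is merely the mechanism that rule happens to use.

```python
# Two levels of explanation for the same event in a toy word processor.

def on_key(state, key):
    """High-level explanation: the program maps a right-arrow press
    to a cursor move. That rule is what causes the cursor to move."""
    if key == "ArrowRight":
        # Low-level explanation: 'a register is incremented'. True,
        # but only because the program's rule says to do it here.
        state["cursor"] += 1
    return state

state = {"cursor": 0}
on_key(state, "ArrowRight")
print(state["cursor"])  # prints 1
```

The increment only happens under the rule’s condition; on any other key, the same hardware-level operation never fires, which is why the program, not the register, carries the explanation.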

I’m guessing you have read BoI ch. 5. Do you have refutations of it? Or of the CBC interview with DD I linked to?

As I understand it your argument is as follows.

1) Consciousness results from the execution of knowledge (algorithms).

Only of certain, special algorithms – and we don’t yet know what distinguishes them from conventional ones (presumably the distinguishing factor is creativity and/or ephemeral properties).

2) Consciousness doesn’t figure into our best explanations of the execution of pre-existing knowledge.

For conventional algorithms, I agree.

3) Therefore, as per DD’s criterion of reality, execution of pre-existing knowledge is mindless.
4) Therefore, the only remaining algorithms for causing consciousness are those that involve creating new knowledge (creativity).

Once we rule out the destruction of knowledge, yes.

I don’t see why premise (2) can’t be ‘Consciousness doesn’t figure into our best explanations of creative algorithms.’ This results in the opposite conclusions.

I think it can’t be “Consciousness doesn’t figure into our best explanations of creative algorithms” because consciousness has to live in one of 1) creative or 2) non-creative algorithms. For the reasons I’ve explained, I don’t think consciousness can live in non-creative algorithms, so, per the law of the excluded middle, creative algorithms are the only potential home left for consciousness. Unless we’re both wrong that consciousness is real and it lives in neither category!

#590 · dennis (verified commenter) · in response to comment #589

How could they not? Discovering that calculators are conscious would be remarkable. The fact that our explanations fail to predict the consciousness of calculators would be a problem we’d want to solve. We’d want to know how it is that calculators are conscious and update our explanations of them accordingly.

It would be remarkable and it would provide insights for our theories of consciousness. However, whether our best explanations of how calculators work would change depends on whether the consciousness actually had an effect on the operation of the calculator. I don’t think it is a given that any or all conscious experience is effectual.

Why should that require or give rise to consciousness? Aren’t you just describing homeostasis? The simplest of organisms have homeostasis – organisms which you presumably do not think are conscious.

I was thinking of more complex modeling. I imagine it as the kind of modeling that is involved in self-referential awareness: modeling of the world around us and our self within that world, modeling of the model itself, and so on. I don’t know how exactly this kind of brain processing creates a conscious experience, but I see it as a plausible alternative to ‘the processing required for creativity causes consciousness’.

Consider this alternative explanation for why the cursor moves along: because the user pressed the right-arrow key, and the program is configured to move the cursor to the right anytime that happens. While your explanation on the low, CPU level is the kind of explanation that may well be technically correct, I think mine is not just correct but also operates on a more appropriate level of emergence.

I agree with you that different levels of abstraction can be more or less appropriate for different kinds of problems. For the problems of consciousness, I think exploring multiple levels of abstraction is useful. I think studying at both the level of brain structure and at the level of abstract experience is useful for testing our theories and providing us with insights. Also, keep in mind that neurons are often discussed in terms of higher level abstractions such as regions/hemispheres of the brain. I definitely do not think that studying the brain from neurons up is THE way to understand consciousness.

For the reasons I’ve explained, I don’t think consciousness can live in non-creative algorithms, so, per the law of the excluded middle, creative algorithms are the only potential home left for consciousness.

What are these reasons that you refer to? I see you calling non-creative algorithms mindless, but I don’t see an argument to support this.

#591 · Kieren (people may not be who they say they are) ·

[W]hether our best explanations of how calculators work would change depends on whether the consciousness actually had an effect on the operation of the calculator.

No. Just the introduction of the word ‘consciousness’, effectual or not, into our explanations of calculators would be a change. In which case the use of DD’s criterion of reality would be appropriate after all, thereby negating #589.

I was thinking of more complex modeling.

Why should sufficient complexity give rise to consciousness?

I imagine it as the kind of modeling that is involved in self referential awareness. Modeling of the world around us and our self within that world.

Again, video-game NPCs do this stuff all the time and our best explanations of them do not invoke consciousness. So they’re not conscious. (I know I’m repeating myself but more on the criterion of reality below.)

Modeling of the model itself […].

And presumably of the modeling of the modeling of the modeling…? Sounds like an infinite regress. If it isn’t, how many levels are required for consciousness?

For the reasons I’ve explained, I don’t think consciousness can live in non-creative algorithms, so, per the law of the excluded middle, creative algorithms are the only potential home left for consciousness.

What are these reasons that you refer to?

Calculators, NPCs, criterion of reality.

It seems to me we have two disagreements, each on a different level. On a basic level, it seems to me we need to break symmetry between the claims ‘sufficiently complex modeling of oneself and one’s surroundings gives rise to consciousness’ and ‘creativity gives rise to consciousness’. On a more general level, we have an epistemological disagreement re the criterion of reality and whether its use is appropriate in this context. (I think I have shown at the beginning of this comment that it is.)

Do you think that’s an accurate summary of the disagreement? It seems to me that, to break symmetry between the two claims, it would be helpful to find a resolution re the criterion of reality first (cuz if we don’t have some criterion for what’s real that we are willing to follow without exception we can always ignore criticism as invoking something that isn’t real).

#592 · dennis (verified commenter) · in response to comment #591

No. Just the introduction of the word ‘consciousness’, effectual or not, into our explanations of calculators would be a change. In which case the use of DD’s criterion of reality would be appropriate after all, thereby negating #589.

I don’t see why the word ‘consciousness’ would appear in the explanation. If the functionality of the calculator is the same as before we knew it was conscious, then I don’t see what adding “and also the calculator is conscious” does to improve the explanation. Would I be able to calculate my taxes better with such an explanation? If given the knowledge that calculators are conscious, I think our explanation of how consciousness works would change, not our explanation of how calculators work.

Another way of stating my problem with applying DD’s criterion is as follows. As per my best understanding of consciousness, it appears it can exist without leaving a trace (since the experience is private). Therefore, I don’t expect all other consciousnesses to appear as things that require explaining. Instead I expect to know about other consciousness as a result of my best explanation of consciousness. If our best explanation of consciousness does not imply calculator consciousness, then as per DD’s criterion we can know that it does not exist.

Why should sufficient complexity give rise to consciousness?

To quote the blog post you linked: “Again, no matter how sophisticated an inborn algorithm is, since it can be executed mindlessly, in computer fashion, that sophistication cannot be evidence of consciousness.”

The word “mindlessly” here is doing all of the work, but whether all inborn algorithms are mindless (without consciousness) is what I am asking you to substantiate at the moment.

Again, video-game NPCs do this stuff all the time and our best explanations of them do not invoke consciousness. So they’re not conscious. (I know I’m repeating myself but more on the criterion of reality below.)

Similar to the calculator. My explanation of how the NPC works would be in terms of path finding algorithms and state machines, and not in terms of any conscious experience that happens to be conjured up and along for the ride. If I came to know that the NPC was conscious, my best explanations of how the NPC works would not change.

And presumably of the modeling of the modeling of the modeling…? Sounds like an infinite regress. If it isn’t, how many levels are required for consciousness?

How many levels… I don’t know yet.

Do you think that’s an accurate summary of the disagreement? It seems to me that, to break symmetry between the two claims, it would be helpful to find a resolution re the criterion of reality first (cuz if we don’t have some criterion for what’s real that we are willing to follow without exception we can always ignore criticism as invoking something that isn’t real).

I think it is accurate. Hopefully my replies above can help us come to the resolution you refer to.

#594 · Kieren (people may not be who they say they are) · in response to comment #592

I don’t see why the word ‘consciousness’ would appear in the explanation.

Presumably for the same reason you want the word ‘consciousness’ to appear in the explanation for how animals work.

Calculators are math machines, animals are gene-spreading machines (per Dawkins). You ask whether you’d be able to calculate your taxes better if consciousness figured in our best explanations of how calculators work, but you don’t ask whether animals would be able to spread their genes better if consciousness figured in our best explanations of how they work. And yet, presumably, your answer to the latter question would be ‘yes’ whereas your implied answer to the former is ‘no’. How does that fit together?

If given the knowledge that calculators are conscious, I think our explanation of how consciousness works would change, not our explanation of how calculators work.

My guess is they’d both change, but at least our best explanations of calculators would have a big unknown (‘why are they conscious?’), and that unknown would form at least an implicit part of such explanations. That would be an improvement at least in the sense that there’d be a pointer toward an open problem and more progress.

As per my best understanding of consciousness, it appears it can exist without leaving a trace (since the experience is private).

If it doesn’t leave a trace, that means even the brain’s hardware remains unchanged. So what good is neuroscience?

Your statement is vague; it leaves room for evasions when you encounter criticism. I think it would be better to phrase it in decisive, more attackable terms, such as ‘consciousness can never leave a trace’ or at least ‘consciousness only leaves a trace when…’.

The statement ‘consciousness can never leave a trace’, for example, sounds false because if someone experiences pain, say, they usually want to fix that, and then do stuff that helps fix that (eg move out of an uncomfortable position into a more comfortable one). At which point there’s a trace even though the experience is totally private.

Otherwise it’s like saying, in OOP terms: private methods on a class never cause any side effects. Which isn’t true.
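A minimal Python sketch of that OOP point (the class and method names are made up for illustration): the method is ‘private’ in the usual underscore convention, yet calling it still leaves a publicly visible trace.

```python
class Organism:
    """Toy model: a 'private' internal process that still causes
    observable side effects, contra the claim that private methods
    never do."""

    def __init__(self):
        self.position = "uncomfortable"  # publicly visible state

    def _feel_pain(self):
        # Private by convention: outside callers never invoke this
        # directly, yet it mutates state visible from the outside.
        self.position = "comfortable"

    def update(self):
        if self.position == "uncomfortable":
            self._feel_pain()

o = Organism()
o.update()
print(o.position)  # prints 'comfortable': the private call left a public trace
```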

If our best explanation of consciousness does not imply calculator consciousness, then as per DD’s criterion we can know that it does not exist.

Now it sounds like you’ve adopted (and applied) the criterion?!

The word “mindlessly” here is doing all of the work, but whether all inborn algorithms are mindless (without consciousness) is what I am asking you to substantiate at the moment.

When something may as well have been done mindlessly, it cannot be evidence of consciousness. So I don’t need to substantiate. We need some behavior which must have been the result of consciousness. You disagree that consciousness necessarily has any behavioral impact, but that makes things more difficult for you because then you can’t point at any animal behavior and say that must have been the result of consciousness. In which case animals may as well not be conscious. Or anything at all may as well be conscious, including rocks, planets, and so on.

If I came to know that the NPC was conscious, my best explanations of how the NPC works would not change.

How could you come to know that if not through explanations?

#595 · dennis (verified commenter) · in response to comment #594

Maybe we can look at this another way. Earlier you quoted the criterion of reality as follows.

[W]e should conclude that a particular thing is real if and only if it figures in our best explanation of something.

Therefore, for us to know that calculator consciousness is not real, we would have to know that it does not figure into any of our best explanations. If calculator consciousness figures into my best explanation of consciousness and not yours, then we disagree about calculator consciousness. However, if you want to point at a lack of calculator/NPC consciousness to refute my alternate theory’s claims about the connection between information processing and consciousness, then you are doing so by assuming your theory of consciousness is true. This is circular because you are assuming your theory is true whilst it is under question.

If this doesn’t help us to get to a resolution then I will reply to each of your previous responses also.

#596 · Kieren (people may not be who they say they are) ·

Therefore, for us to know that calculator consciousness is not real, we would have to know that it does not figure into any of our best explanations.

For clarity, I think there are two possibilities: that calculators are conscious either follows from our best explanation of consciousness, or it follows from our best explanation of calculators.

[I]f you want to point at a lack of calculator/NPC consciousness to refute my alternate theory’s claims about the connection between information processing and consciousness, then you are doing so by assuming your theory of consciousness is true. This is circular because you are assuming your theory is true whilst it is under question.

This is the standard Popperian approach: we assume, tentatively, that a conjecture is true until it is refuted, even if that conjecture is currently “under question”. I don’t see how that leads to circularity. Since all our conjectures are always tentative in this way, they’re always “under question”/open to revision anyway.

If this doesn’t help us to get to a resolution then I will reply to each of your previous responses also.

Sure.

An alternate, if lesser, resolution is that we simply have different epistemologies; that it’s going to be difficult for us to come to a resolution on the question of animal consciousness until we resolve the epistemological difference. That’s not surprising since the question of animal consciousness is directly influenced by epistemological considerations. But we still got to understand each other’s (and our own) viewpoints better, which, as Popper would say, is more than enough.

#597 · dennis (verified commenter) · in response to comment #596

This is the standard Popperian approach: we assume, tentatively, that a conjecture is true until it is refuted, even if that conjecture is currently “under question”. I don’t see how that leads to circularity. Since all our conjectures are always tentative in this way, they’re always “under question”/open to revision anyway.

The Popperian approach is to assume that a conjecture is true if it is the best conjecture we have. If we have N equally plausible explanations then we need to break symmetry. You cannot break symmetry by assuming one of the N explanations is true, and deriving consequences from it to refute the other explanations. The choice of explanation would be arbitrary then.

I see this happening in our current discussion as follows.
D: ‘Creative algorithms cause consciousness’ is the best explanation because there are no plausible alternatives.
K: ‘Non-creative algorithms cause consciousness’ is a plausible alternative.
D: That would mean that calculators and NPCs are conscious, which we know they are not because we tentatively assume that only creative algorithms can cause consciousness.

An alternate, if lesser, resolution is that we simply have different epistemologies; that it’s going to be difficult for us to come to a resolution on the question of animal consciousness until we resolve the epistemological difference.

I still think I can proceed within a Popperian framework, so I’d prefer to continue avoiding the epistemology discussion if possible.

#598 · Kieren (people may not be who they say they are) · in response to comment #597

The Popperian approach is to assume that a conjecture is true if it is the best conjecture we have.

No. That sounds like a justificationist perversion of Popperian epistemology because it would involve ‘weighing’ conjectures somehow based on how ‘good’ they are. DD explains in BoI ch. 13 why that’s a bad idea. (Ironically – and, IIRC, Elliot points this out somewhere – that means DD himself is a justificationist since he wants to weigh and choose explanations based on how “good” (“hard to vary”) they are, as opposed to choosing based on whether they are refuted vs. non-refuted, which I believe would be Elliot’s approach, ie the binary approach, which I am advocating.)

If we have N equally plausible explanations then we need to break symmetry. You cannot break symmetry by assuming one of the N explanations is true, and deriving consequences from it to refute the other explanations. The choice of explanation would be arbitrary then.

This I agree with, and I see now why you thought my argument was circular. Some other explanation is needed: either from our background knowledge or a new one. Either way it can’t be one of the conflicting theories. But I’m not choosing one of those (see below). Note also that Popperian epistemology gives us a process of elimination that leaves us, ideally, with one non-refuted conjecture, which need not be the ‘best’ (depending on how you weigh).

I see this happening in our current discussion as follows.
D: ‘Creative algorithms cause consciousness’ is the best explanation because there are no plausible alternatives.
K: ‘Non-creative algorithms cause consciousness’ is a plausible alternative.
D: That would mean that calculators and NPCs are conscious, which we know they are not because we tentatively assume that only creative algorithms can cause consciousness.

That isn’t my argument; I think there’s been a misunderstanding. Here’s how I’d change your description of our discussion:

D: ‘Creative algorithms cause consciousness’ is the ~~best~~ only explanation because there are no plausible alternatives.
K: ‘Non-creative algorithms cause consciousness’ is a plausible alternative.
D: That would mean that calculators and NPCs are conscious, which we know they are not because ~~we tentatively assume that only creative algorithms can cause consciousness~~ our best explanations of calculators and NPCs do not invoke consciousness, so, per DD’s criterion of reality, they really aren’t conscious.

I invoke these explanations to show that the claim that “‘[n]on-creative algorithms cause consciousness’ is a plausible alternative” must be false, by modus tollens, since that claim makes a prediction about calculators that isn’t true. Hence the claim is eliminated, I think, while DD’s claim – that creativity causes consciousness – is still standing.
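One way to spell out that modus tollens, with $P$ standing for ‘non-creative algorithms cause consciousness’ and $Q$ for ‘calculators (which run only non-creative algorithms) are conscious’:

```latex
\begin{align*}
& P \rightarrow Q && \text{if non-creative algorithms caused consciousness, calculators would be conscious} \\
& \neg Q && \text{our best explanations of calculators do not invoke consciousness (criterion of reality)} \\
& \therefore\ \neg P && \text{non-creative algorithms do not cause consciousness}
\end{align*}
```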

As you can see, the circularity you spoke of is not there as I do not reference the theory under question as a symmetry breaker. Instead, I refer to our best explanations of calculators and NPCs as well as the criterion of reality and the modus tollens, all four of which form part of my background knowledge.

I still think I can proceed within a Popperian framework […].

Maybe I’m talking out of my ass here, but I don’t think you understand it well enough (see the beginning of this comment). You’d be proceeding with what you think is a Popperian ‘framework’ but is actually justificationism in Popperian clothing.

I’d like to honor your request to avoid the epistemology discussion (though you’ve already continued it with your comments around how not to break symmetry), but I don’t currently see how to avoid it. Perhaps a way forward is a discussion around the criterion of reality in particular and understanding better our apparent disagreement around that criterion? If you have other ideas, I’m open to them, too.

#599 · dennis (verified commenter) · in response to comment #598 · Referenced in comment #613

I’m continuing to ignore the disagreements about epistemology, but you are correct that we might have to get into it soon.

I invoke these explanations to show that the claim that “‘[n]on-creative algorithms cause consciousness’ is a plausible alternative” must be false, by modus tollens, since that claim makes a prediction about calculators that isn’t true. Hence the claim is eliminated, I think, while DD’s claim – that creativity causes consciousness – is still standing.

As you can see, the circularity you spoke of is not there as I do not reference the theory under question as a symmetry breaker. Instead, I refer to our best explanations of calculators and NPCs as well as the criterion of reality and the modus tollens, all four of which form part of my background knowledge.

Ok, thanks for clearing this up. You are correct that it is not circular reasoning. Instead I think it is an incorrect application of the criterion. According to the criterion, something is not real if it doesn’t exist in any of our best explanations. Ignoring our best explanations of consciousness and considering only our other background knowledge (how calculators operate, etc), yes, calculator consciousness does not factor in, and therefore we know it doesn’t exist. However, we both have our own best explanations of consciousness to consider. If my theory of consciousness implies calculator consciousness, then as per the criterion, it exists (for me), and this happens without conflicting with background knowledge. On the other hand, your best theory of consciousness does not imply calculator consciousness, and so we end up disagreeing about calculator consciousness. We disagree about calculator consciousness because our theories of consciousness differ. Symmetry is not yet broken.

I hope that makes sense. Basically I’m saying that the criterion requires that we consider all of our current best theories, not a subset, which is what I now see you doing.

#600 · Kieren (people may not be who they say they are) ·

[W]e both have our own best explanations of consciousness to consider.

Why consider a refuted explanation?

The quote from BoI chapter 1 goes:

[A] particular thing is real if and only if it figures in our best explanation of something.

Note the singular “explanation”. And a refuted explanation can’t be good anymore – the following quote (also from chapter 1) is about science in particular but you can easily imagine how it applies to symmetry breaking in general:

When a formerly good explanation has been falsified by new observations, it is no longer a good explanation, because the problem has expanded to include those observations. Thus the standard scientific methodology of dropping theories when refuted by experiment is implied by the requirement for good explanations.

With that said, back to your comment:

Basically I’m saying that the criterion requires that we consider all of our current best theories, not a subset, which is what I now see you doing.

Our current best theories do not include refuted ones. So I don’t think my application of the criterion is incorrect.

A refuted theory does not deserve consideration until the refutation is counter-refuted. Four potential targets to attack my refutation are:

  1. Explanations of calculators
  2. Explanations of NPCs
  3. The criterion of reality
  4. Modus tollens

If you refute any one of them, my refutation is invalid, and then your theory regains the status ‘non-refuted’ and can thus be reconsidered.

With that in mind, when you wrote a bit further up…

If my theory of consciousness implies calculator consciousness, then as per the criterion, it exists (for me), and this happens without conflicting with background knowledge.

…even if non-refuted, that theory conflicts with explanations of calculators, which presumably form part of your background knowledge (unless you successfully refute them as per point 1 in the above list). If they do, you’d first want to break symmetry there.


By the way, a better way to state the criterion, ie in a non-justificationist way, IMO, is to simply say ‘something is real if and only if it figures in a non-refuted explanation of something’ – that phrasing also happens to leave room for multiple non-refuted explanations.

#601 · dennis (verified commenter) · in response to comment #600

[W]e both have our own best explanations of consciousness to consider.

Why consider a refuted explanation?

Our current best theories do not include refuted ones. So I don’t think my application of the criterion is incorrect.

An explanation is not refuted at the moment it is conjectured simply because it has implications that are not implied by any existing explanations. We would be unable to progress if this were the case.

As an analogy, imagine scientists first conjecturing theories involving charged particles to explain aspects of electricity. Surely you don’t think these theories are refuted because charged particles don’t figure into any of our existing explanations?

…even if non-refuted, that theory conflicts with explanations of calculators, which presumably form part of your background knowledge (unless you successfully refute them as per point 1 in the above list). If they do, you’d first want to break symmetry there.

I don’t see the conflict. If it were true that the algorithmic processing occurring within a calculator was sufficient to create some kind of conscious experience, why would it be necessary for such a consciousness to have any effect on the operation of the calculator?

#602 · Kieren (people may not be who they say they are) ·

An explanation is not refuted at the moment it is conjectured simply because it has implications that are not implied by any existing explanations.

Agreed. That just means there’s a conflict; an opportunity to break symmetry.

Imagine scientists first conjecturing theories involving charged particles to explain aspects of electricity. Surely you don’t think these theories are refuted because charged particles don’t figure into any of our existing explanations?

Correct, I do not. But, per Popper, new theories should explain, at least implicitly, why their predecessors are wrong. Which is what I’ve suggested as one of the four attack vectors: you could explain why our explanations of calculators are wrong; why they don’t imply the absence of consciousness (which you seem to attempt below anyway in your remark about calculator operations). That way, you would break symmetry in favor of the prediction that calculators are (or at least might be) conscious, and then modus tollens doesn’t rule out anymore that non-creative algorithms create consciousness.

If it were true that the algorithmic processing occurring within a calculator was sufficient to create some kind of conscious experience, why would it be necessary for such a consciousness to have any effect on the operation of the calculator?

If you’re repeating that our explanations (not just operations) of calculators need not change if calculators are conscious, then I repeat that you then also shouldn’t think our explanations of animals should change if they are (or aren’t) conscious. But it seems that you do want them to change in the case of animals. And you also want them to change in the case of the human brain (where you don’t restrict yourself just to operations but want neuroscience to explain how those operations result in consciousness, and such an explanation would then form part of the explanation of the brain). So, if only for consistency – but also in an attempt to understand reality – you should want them to change when it comes to calculators, too.

It sounds like you have a somewhat instrumentalist view of explanations (when it suits you), which leads you to reduce explanations of calculators to a description of their operations only. But that isn’t a valid way around my application of the modus tollens.

#603 · dennis (verified commenter) · in response to comment #602

PS: Re when you wrote:

An explanation is not refuted at the moment it is conjectured simply because it has implications that are not implied by any existing explanations.

To be clear, it’s not just that existing explanations do not imply calculator consciousness (although that would be enough of a challenge) – together, the four pieces of background knowledge I’ve referenced rule out calculator consciousness.

#604 · dennis (verified commenter) · in response to comment #603

If you’re repeating that our explanations (not just operations) of calculators need not change if calculators are conscious, then I repeat that you then also shouldn’t think our explanations of animals should change if they are (or aren’t) conscious.

The only explanations about animals that would change would be the ones that have something to do with their consciousness (if they had any at all). E.g. the horse bucks and neighs when it experiences pain. Existing explanations about how the animal’s muscles, organs, and neurons function would not change. Likewise for the calculator: only those explanations that have something to do with calculator consciousness would change.

Before any implications of calculator consciousness we had perfectly fine explanations about how the different buttons relate to certain mathematical operations. We had explanations about how these operations are implemented with logic gates, and explanations about how these logic gates are constructed in hardware. None of these explanations conflict with my theory of consciousness, because they don’t say anything about consciousness. Considered alongside my theory, they are all still perfectly good at explaining what they were intended to explain. The only thing that has changed is that we now have an explanation of consciousness that implies that calculators might be conscious. Note that this is not an explanation about calculator consciousness, but instead a general theory of consciousness that implies that calculators may be conscious.

Can you provide me with an existing explanation of calculators that conflicts (would need changing) with the implication that calculators might be conscious? The only explanation I can think of is the theory of consciousness (the thing under question).

And you also want them to change in the case of the human brain (where you don’t restrict yourself just to operations but want neuroscience to explain how those operations result in consciousness, and such an explanation would then form part of the explanation of the brain).

I don’t want the existing explanations of neurons, brain processing, etc. to change. I want additional explanations that explain how such activity creates consciousness. If it turns out that the existing explanations are inadequate and must be changed to explain consciousness, then so be it. It is a possibility and I am open to it.

#605 · Kieren (people may not be who they say they are) · in response to comment #604
Reply

The only explanations about animals that would change would be the ones that have something to do with their consciousness (if they had any at all). E.g. the horse bucks and neighs when it experiences pain. Existing explanations about how the animal’s muscles, organs, and neurons function would not change. Likewise for the calculator: only those explanations that have something to do with calculator consciousness would change.

Of course, if you divvy them up into those explanations that have something to do with consciousness and those that do not, then only some of them are going to change. But for animals/calculators as a whole, the explanations would change. (Imagine how much our explanations of humans would change if we learned that humans are not conscious! We wouldn’t then say ‘but explanations of our muscles remained the same’.) In a related context, you wanted me to “consider all of our current best theories, not a subset”, i.e. have a holistic picture – now you want me to consider only subsets of theories about calculators.

None of these explanations [about calculators] conflict with my theory of consciousness, because they don’t say anything about consciousness.

The fact that none of our explanations of calculators say anything about consciousness is the very reason you should think they’re not conscious. And again, the four concepts from our background knowledge, taken together, do show that calculators really aren’t conscious, and then they do conflict with your theory of consciousness because it predicts that calculators are conscious.

Note that this is not an explanation about calculator consciousness, but instead a general theory of consciousness that implies that calculators may be conscious.

Again, I suggest phrasing things in more absolute terms such as ‘must’, ‘cannot’ etc. If calculators may be conscious it’s too easy to evade criticism. If you’re going to constrain it, specify under which conditions calculators must be conscious and why and under which conditions they cannot be conscious and why not.

But yes, I believe I understand your point here: you’re saying our explanations of the operation of calculators, their hardware, and so on would not change. However, I do think that, if we had a working theory of consciousness that implied that even calculators are conscious, people would get busy trying to understand what it is about calculators that makes them conscious and then amend our explanations of calculators as a whole accordingly, at the very least by adding an implicit reference to such a working theory of consciousness.

Can you provide me with an existing explanation of calculators that conflicts (would need changing) with the implication that calculators might be conscious? The only explanation I can think of is the theory of consciousness (the thing under question).

I think any explanation of calculators as a whole would need to be amended to include how their functionality gives rise to consciousness, at least by implicitly referencing your theory of consciousness. I don’t know in detail what our current explanation of calculators looks like – I don’t manufacture calculators; I would just refer to higher-level concepts such as basic arithmetic on a programming level – but I do know that that explanation doesn’t currently speak of consciousness, or else it would be commonly thought that calculators are conscious. I also don’t know in detail what the amendment would look like since I don’t know how consciousness works. Of course, tautologically, the sub-explanations that don’t have to do with consciousness aren’t going to need to change, not even by implicit reference. (Note that this is another reason I don’t find neuroscience promising when it comes to the brain; presumably, existing explanations of the brain wouldn’t change.)

Please provide a counter-refutation to my refutation. Otherwise, I think it’s likely that we’re going to reach an impasse. If you’re not sure yet how to refute it, asking questions about it or steelmanning it could be a good way forward.

#606 · dennis (verified commenter) · in response to comment #605
Reply

In a related context, you wanted me to “consider all of our current best theories, not a subset”, ie have a holistic picture – now you want me to consider only subsets of theories about calculators.

I still want you to consider all theory. I just wanted to make a distinction between theories that refer to consciousness and theories that don’t. It seems you agree that such a distinction can be made.

The fact that none of our explanations of calculators say anything about conscious is the very reason you should think they’re not conscious.

My theory provides an explanation for why none of our existing theories say anything about calculator consciousness. The explanation is as follows: Consciousness is created as a result of information processing. It does not necessarily have any effect on the information processing. Therefore our existing theories were able to perfectly explain such information processing at many different levels of explanation without reference to consciousness.

You are correct that prior to my sort of explanation of consciousness, calculator consciousness was ruled out. However, as shown above, a world view that incorporates my theory of consciousness does so without conflict.

However, I do think that, if we had a working theory of consciousness that implied that even calculators are conscious, people would get busy trying to understand what it is about calculators that makes them conscious and then amend our explanations of calculators as a whole accordingly, at the very least by adding an implicit reference to such a working theory of consciousness.

If after accepting my theory they think they can improve other theories then they should definitely try.

I think any explanation of calculators as a whole would need to be amended to include how their functionality gives rise to consciousness, at least by implicitly referencing your theory of consciousness.

We would create additional explanations to bridge the explanations of calculators and my explanations of consciousness. This is how we would come to know about calculator consciousness in the first place. For example, calculators perform arithmetic by the following computations, and as per Kieren’s explanation of consciousness this is sufficient to produce a consciousness.

Please provide a counter-refutation to my refutation.

My counter-refutation is that the lack of mention of NPC or calculator consciousness in our existing theories is explained by my theory (earlier in this post). Whilst DD’s criterion allowed us to rule out calculator/NPC consciousness up until now, once we incorporate my theory this is no longer the case.

#607 · Kieren (people may not be who they say they are) · in response to comment #606
Reply

Consciousness is created as a result of information processing. It does not necessarily have any effect on the information processing.

First, please explain when consciousness must have an effect on information processing and when it can’t. Otherwise it’s too hand-wavy to work as a counter-refutation. (And even then such an explanation is only necessary but maybe not sufficient; I’ll have to think more about it.)

Second, you seem to be saying that the only reason consciousness would figure into our best explanations of calculators is if consciousness had an effect on information processing. Why must that be the case? Why couldn’t consciousness figure into our best explanations of calculators for other reasons, despite not having any effect?

Third, you say that “[c]onsciousness is created as a result of information processing.” All information processing?

#608 · dennis (verified commenter) · in response to comment #607
Reply

PS: Thinking more about this, overall, I can see that your claim that ‘calculators are conscious after all; our best explanations of them just didn’t need to mention consciousness’ conflicts with my refutation, but why should we break symmetry in favor of the former and not the latter? It seems to me that it’s not really a counter refutation unless it explains that, too.

#609 · dennis (verified commenter) · in response to comment #608
Reply

First, please explain when consciousness must have an effect on information processing and when it can’t. Otherwise it’s too hand-wavy to work as a counter-refutation.

My theory doesn’t answer this question yet. Does your theory?

Second, you seem to be saying that the only reason consciousness would figure into our best explanations of calculators is if consciousness had an effect on information processing.

Not just information processing. I use it as an example. The reasoning applies to all our other explanations of things we up until now thought of as unconscious. Again, if you have an example of an explanation that says something that conflicts with my theory of consciousness, then provide it.

Third, you say that “[c]onsciousness is created as a result of information processing.” All information processing?

The deepest I have conjectured is that it is the process of self-referential modeling.

PS: Thinking more about this, overall, I can see that your claim that ‘calculators are conscious after all; our best explanations of them just didn’t need to mention consciousness’ conflicts with my refutation, but why should we break symmetry in favor of the former and not the latter?

Which refutation are you referring to here? And how does it conflict?

#612 · Kieren (people may not be who they say they are) ·
Reply

[Dennis:] First, please explain when consciousness must have an effect on information processing and when it can’t. Otherwise it’s too hand-wavy to work as a counter-refutation.

[Kieren:] My theory doesn’t answer this question yet. Does your theory?

I don’t think it matters cuz I’m not currently trying to counter refute my own refutation :)

Again, if you have an example of an explanation that says something that conflicts with my theory of consciousness, then provide it.

My original refutation?

[Dennis:] Third, you say that “[c]onsciousness is created as a result of information processing.” All information processing?

[Kieren:] The deepest I have conjectured is that it is the process of self-referential modeling.

Which calculators don’t have, right? In which case calculators aren’t conscious after all.

Which refutation are you referring to here? And how does it conflict?

#599. It conflicts because my refutation showed by invoking modus tollens etc. that calculators are not conscious.

#613 · dennis (verified commenter) · in response to comment #612
Reply

Why have you stopped discussing?

#618 · dennis (verified commenter) ·
Reply

For various reasons I got distracted from this discussion for a long time. Based on your reaction to my last hiatus I figured you probably weren’t interested in picking things back up. If you want me to respond to your last post I can.

#619 · Kieren (people may not be who they say they are) ·
Reply

You figured right; let’s conclude the discussion with an impasse due to insufficient interest on both sides (though for different reasons).

#620 · dennis (verified commenter) · in response to comment #619
Reply
